diff --git a/docs/advanced-functions/external-storage.md b/docs/advanced-functions/external-storage.md index 703f7596..a1aa39f6 100644 --- a/docs/advanced-functions/external-storage.md +++ b/docs/advanced-functions/external-storage.md @@ -106,7 +106,7 @@ After the access point is configured, IOMesh can provide storage externally to a > **_NOTE_**: To use this function, the external Kubernetes cluster should be able to access IOMesh `DATA_CIDR` that was configured in [Prerequisites](../deploy-iomesh-cluster/prerequisites#network-requirements). -1. Set up [`open-iscsi`](../deploy-iomesh-cluster/setup-worker-node) on the worker nodes of the external Kubernetes cluster. +1. Set up [`open-iscsi`](../appendices/setup-worker-node) on the worker nodes of the external Kubernetes cluster. 2. Add the Helm chart repository. ```shell diff --git a/docs/advanced-functions/localpv-manager.md b/docs/advanced-functions/localpv-manager.md index ee024676..f37eed9c 100644 --- a/docs/advanced-functions/localpv-manager.md +++ b/docs/advanced-functions/localpv-manager.md @@ -275,7 +275,7 @@ The following example assumes that the local PV is created in the `/var/iomesh/l | `parameters.csi.storage.k8s.io/fstype ` | The filesystem type when the `volumeMode` is set to `Filesystem`, which defaults to `ext4`. | | `volumeBindingMode` | Controls when volume binding and dynamic provisioning should occur. IOMesh only supports `WaitForFirstConsumer`. | - When creating a StorageClass, you have the option to configure `deviceSelector` to filter disks as desired. For configuration details, refer to [`Device Selector`](../deploy-iomesh-cluster/setup-iomesh.md) and [Kubernetes Labels and Selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). + When creating a StorageClass, you have the option to configure `deviceSelector` to filter disks as desired. 
For configuration details, refer to [`Device Selector`](../deploy-iomesh-cluster/install-iomesh) and [Kubernetes Labels and Selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). For example, `iomesh.com/bd-driverType: SSD` means the StorageClass will only select SSD disks for creating local PVs.
diff --git a/docs/advanced-functions/manage-multiple-cluster.md b/docs/advanced-functions/manage-multiple-cluster.md
index d69d478b..bdfdf002 100644
--- a/docs/advanced-functions/manage-multiple-cluster.md
+++ b/docs/advanced-functions/manage-multiple-cluster.md
@@ -18,7 +18,6 @@ IOMesh CSI Driver is shared by all IOMesh clusters to facilitate connection, whi
 - Verify that all requirements in [Prerequisites](../deploy-iomesh-cluster/prerequisites.md) are met.
 - The IOMesh version should be 1.0.0 or above.
 - A Kubernetes cluster consisting of at least 6 worker nodes.
-- Verify that [`open-iscsi`](../deploy-iomesh-cluster/setup-worker-node) has been set up for each worker node.
 
 **Procedure**
diff --git a/docs/appendices/release-notes.md b/docs/appendices/release-notes.md
index d9376dda..2d5cbe00 100644
--- a/docs/appendices/release-notes.md
+++ b/docs/appendices/release-notes.md
@@ -4,10 +4,71 @@ title: Release Notes
 sidebar_label: Release Notes
 ---
 
+## IOMesh 1.0.1 Release Notes
+### What's New
+#### New Feature
+Automatically configures `open-iscsi` for all schedulable nodes in the Kubernetes cluster when deploying IOMesh or adding nodes to the IOMesh cluster.
+
+#### Resolved Issues
+- The I/O performance was not as expected due to inconsistent port information between the `chunk` service registered with the `meta` service and the locally recorded port information. The issue has been resolved in this release.
+- IOMesh was unable to run in Kubernetes clusters of version 1.21 or earlier due to incompatibility between the `zookeeper-operator` version and the Kubernetes version. The issue has been resolved in this release.
+- In Kubernetes clusters of version 1.20 or earlier, when the NDM webhook was enabled, the creation of BlockDevice and BlockDeviceClaim failed due to empty CR parameters. The issue has been resolved in this release.
+
+### Specifications
+
+| Component | Version |
+| ------- | ------- |
+| iomesh-operator | 1.0.1 |
+| csi-driver | 2.6.0 |
+| zbs | 5.3.0 |
+| zookeeper | 3.5.9 |
+| node-disk-manager | 1.8.0 |
+| hostpath-provisioner | 0.5.3 |
+| block-device-monitor | 0.1.0 |
+| local-pv-manager | 0.1.0 |
+
+### Compatibility
+
+#### Server Architecture Compatibility for IOMesh
+
+IOMesh is compatible with Intel x86_64, Hygon x86_64, or Kunpeng AArch64 architectures.
+
+#### Kubernetes and Linux OS Compatibility for IOMesh
+
+| Software/OS | Version |
+| ------- | ------- |
+| Kubernetes | • Kubernetes 1.17-1.25<br>• OpenShift 3.11-4.10 |
+| Linux OS | • CentOS 7<br>• CentOS 8<br>• CoreOS<br>• RHEL 7<br>• RHEL 8<br>• openEuler 22.03 LTS |
+ +>_NOTE_: IOMesh has no dependencies on the Linux OS version. The versions listed above are tested versions only. + ## IOMesh 1.0.0 Release Notes ### What's New - #### New Features **Installation & Deployment** - Adds support for deploying IOMesh on the Hygon x86_64 and Kunpeng AArch64 architectures. @@ -81,8 +142,8 @@ IOMesh is compatible with Intel x86_64, Hygon x86_64 or Kunpeng AArch64 architec Kubernetes @@ -167,8 +228,8 @@ IOMesh is compatible with Intel x86_64 and AMD x86_64 architectures. Kubernetes @@ -262,8 +323,8 @@ IOMesh Compatibility Kubernetes diff --git a/docs/deploy-iomesh-cluster/setup-worker-node.md b/docs/appendices/setup-worker-node.md similarity index 77% rename from docs/deploy-iomesh-cluster/setup-worker-node.md rename to docs/appendices/setup-worker-node.md index 1942cd84..f78c056a 100644 --- a/docs/deploy-iomesh-cluster/setup-worker-node.md +++ b/docs/appendices/setup-worker-node.md @@ -4,9 +4,7 @@ title: Set Up Worker Node sidebar_label: Set Up Worker Node --- -Once you have confirmed that all requirements in [Prerequisites](../deploy-iomesh-cluster/prerequisites) are met, set up `open-iscsi` for each worker node on which IOMesh will be installed and running. - -It is important to note that if you intend for IOMesh to provide storage to other nodes within this Kubernetes cluster, you will also need to set up `open-iscsi` for those nodes. +Before setting up `open-iscsi` for the worker nodes on which IOMesh will be installed and running, ensure all requirements in [Prerequisites](../deploy-iomesh-cluster/prerequisites) are met. 1. On the node console, run the following command to install `open-iscsi`. 
diff --git a/docs/cluster-operations/upgrade-cluster.md b/docs/cluster-operations/upgrade-cluster.md index 9b504616..b7ec5982 100644 --- a/docs/cluster-operations/upgrade-cluster.md +++ b/docs/cluster-operations/upgrade-cluster.md @@ -4,20 +4,20 @@ title: Upgrade Cluster sidebar_label: Upgrade Cluster --- -You have the option to upgrade the IOMesh cluster from version 0.11.1 to 1.0.0 either online or offline. Before proceeding, consider the following: +You have the option to upgrade the IOMesh cluster either online or offline. Before proceeding, consider the following: - Upgrade is not supported if the Kubernetes cluster has only 1 meta pod or 1 chunk pod. -- Due to the limitations of the Kubernetes CRD upgrade mechanism, the IOMesh cluster upgraded to 1.0.0 from 0.11.1 cannot run on the Kubernetes cluster of version 1.25 or above. +- Due to the limitations of the Kubernetes CRD upgrade mechanism, the IOMesh cluster upgraded to this release from 0.11.1 cannot run on the Kubernetes cluster of version 1.25 or above. -> _NOTE:_ -> There might be temporary I/O latency fluctuations during the upgrade. +> _NOTE:_ There might be temporary I/O latency fluctuations during the upgrade. + +## Upgrade from 0.11.1 to 1.0.1 -**Procedure** 1. Delete the default StorageClass. - IOMesh 1.0.0 has a different default StorageClass with updated parameters compared to the previous version. You just need to delete the old one and any PVC using it will not be impacted. + This release of IOMesh has a different default StorageClass with updated parameters compared to the previous version. You just need to delete the old one and any PVC using it will not be impacted. ```shell kubectl delete sc iomesh-csi-driver @@ -27,14 +27,14 @@ You have the option to upgrade the IOMesh cluster from version 0.11.1 to 1.0.0 e ```shell kubectl delete Validatingwebhookconfigurations iomesh-validating-webhook-configuration ``` -3. Install the CRD of IOMesh 1.0.0. +3. Install the CRD of IOMesh 1.0.1. 
```shell kubectl apply -f https://iomesh.run/config/crd/iomesh.com_blockdevicemonitors.yaml ``` -4. Get the new fields and values added in IOMesh 1.0.0. +4. Get the new fields and values added in IOMesh 1.0.1. ```shell - wget https://iomesh.run/config/merge-values/v1.0.0.yaml -O merge-values.yaml + wget https://iomesh.run/config/merge-values/v1.0.1.yaml -O merge-values.yaml ``` 5. Edit the YAML file `merge-values.yaml`. Set the value of the `iomesh.edition` field to match the edition specified for the previous version of IOMesh. ```yaml @@ -45,9 +45,8 @@ You have the option to upgrade the IOMesh cluster from version 0.11.1 to 1.0.0 e 6. Upgrade the IOMesh cluster, which will merge new fields and values while keeping existing ones. Then wait for a few minutes till all pods are running. ```bash - helm upgrade --namespace iomesh-system iomesh iomesh/iomesh --version v1.0.0 --reuse-values -f merge-values.yaml + helm upgrade --namespace iomesh-system iomesh iomesh/iomesh --version v1.0.1 --reuse-values -f merge-values.yaml ``` - 7. Verify that all pods are in `Running` state. If so, then IOMesh has been successfully upgraded. ```bash watch kubectl get pod --namespace iomesh-system @@ -56,7 +55,7 @@ You have the option to upgrade the IOMesh cluster from version 0.11.1 to 1.0.0 e 1. Delete the default StorageClass. - IOMesh 1.0.0 has a different default StorageClass with updated parameters compared to the previous version. You just need to delete the old one and any PVC using it will not be impacted. + This release of IOMesh has a different default StorageClass with updated parameters compared to the previous version. You just need to delete the old one and any PVC using it will not be impacted. ```shell kubectl delete sc iomesh-csi-driver @@ -67,7 +66,9 @@ You have the option to upgrade the IOMesh cluster from version 0.11.1 to 1.0.0 e ```shell kubectl delete Validatingwebhookconfigurations iomesh-validating-webhook-configuration ``` -3. 
Download [IOMesh Offline Installation Package](../appendices/downloads). Make sure to replace `` with `v1.0.0` and `` based on your CPU architecture. +3. Download [IOMesh Offline Installation Package](../appendices/downloads) on each worker node and the master node. Then run the following command to unpack the installation package on each worker node and the master node. + + Make sure to replace `` with `v1.0.1` and `` based on your CPU architecture. Then refer to [Custom Offline Installation](../deploy-iomesh-cluster/install-iomesh.md#offline-installation) to load the IOMesh image on each worker node. - Hygon x86_64: `hygon-amd64` - Intel x86_64: `amd64` - Kunpeng AArch64: `arm64` @@ -75,7 +76,7 @@ You have the option to upgrade the IOMesh cluster from version 0.11.1 to 1.0.0 e ```shell tar -xf iomesh-offline--.tgz && cd iomesh-offline ``` -4. Install the CRD of IOMesh 1.0.0. +4. Install the CRD of IOMesh 1.0.1. ```shell kubectl apply -f ./configs/iomesh.com_blockdevicemonitors.yaml @@ -99,3 +100,54 @@ You have the option to upgrade the IOMesh cluster from version 0.11.1 to 1.0.0 e watch kubectl get pod --namespace iomesh-system ``` + +## Upgrade from 1.0.0 to 1.0.1 + + + +1. Temporarily disable IOMesh Webhook to avoid upgrade failure. Once the upgrade is successful, it will be automatically enabled again. + + ```shell + kubectl delete Validatingwebhookconfigurations iomesh-validating-webhook-configuration + ``` + +2. Upgrade the IOMesh cluster, which will keep existing fields and values. Then wait for a few minutes till all pods are running. + + ```bash + helm upgrade --namespace iomesh-system iomesh iomesh/iomesh --version v1.0.1 --reuse-values -f merge-values.yaml + ``` + +3. Verify that all pods are in `Running` state. If so, then IOMesh has been successfully upgraded. + ```bash + watch kubectl get pod --namespace iomesh-system + ``` + + +1. Temporarily disable IOMesh Webhook to avoid upgrade failure. 
Once the upgrade is successful, it will be automatically enabled again.
+
+   ```shell
+   kubectl delete Validatingwebhookconfigurations iomesh-validating-webhook-configuration
+   ```
+2. Download [IOMesh Offline Installation Package](../appendices/downloads) on each worker node and the master node. Then run the following command to unpack the installation package on each node.
+
+   Make sure to replace `` with `v1.0.1` and `` based on your CPU architecture. Then refer to [Custom Offline Installation](../deploy-iomesh-cluster/install-iomesh.md#offline-installation) to load the IOMesh image on each worker node.
+
+   - Hygon x86_64: `hygon-amd64`
+   - Intel x86_64: `amd64`
+   - Kunpeng AArch64: `arm64`
+
+   ```shell
+   tar -xf iomesh-offline--.tgz && cd iomesh-offline
+   ```
+
+3. Upgrade the IOMesh cluster, which will merge new fields and values while keeping existing ones. Then wait for a few minutes till all pods are running.
+
+   ```bash
+   ./helm upgrade --namespace iomesh-system iomesh ./charts/iomesh --reuse-values -f ./configs/merge-values.yaml
+   ```
+
+4. Verify that all pods are in the `Running` state. If so, then IOMesh has been successfully upgraded.
+   ```bash
+   watch kubectl get pod --namespace iomesh-system
+   ```
+
diff --git a/docs/deploy-iomesh-cluster/install-iomesh.md b/docs/deploy-iomesh-cluster/install-iomesh.md
index e6536800..a3777aee 100644
--- a/docs/deploy-iomesh-cluster/install-iomesh.md
+++ b/docs/deploy-iomesh-cluster/install-iomesh.md
@@ -4,13 +4,13 @@ title: Install IOMesh
 sidebar_label: Install IOMesh
 ---
 
-Before installing IOMesh, refer to the following to choose how you install IOMesh.
+IOMesh can be installed on all Kubernetes platforms using various methods. Choose the installation method based on your environment. If the Kubernetes cluster network cannot connect to the public network, you can opt for custom offline installation.
-- Quick Installation: One-click online installation with default parameter values that cannot be modified. -- Custom Installation: Configure parameters during installation, but ensure that your Kubernetes cluster is connected to the public network. -- Offline Installation: Recommended for Kubernetes clusters with no public network connectivity and supports custom parameter configuration during installation. +- One-click online installation: Use the default settings in the file without customizing parameters. +- Custom online installation: Supports custom parameters. +- Custom offline installation: Supports custom parameters. -## Quick Installation +## One-Click Online Installation **Prerequisite** - The CPU architecture of the Kubernetes cluster must be Intel x86_64 or Kunpeng AArch64. @@ -29,7 +29,7 @@ Before installing IOMesh, refer to the following to choose how you install IOMes ```shell # The IP address of each worker node running IOMesh must be within the same IOMESH_DATA_CIDR. - export IOMESH_DATA_CIDR=10.234.1.0/24; curl -sSL https://iomesh.run/install_iomesh.sh | sh - + export IOMESH_DATA_CIDR=10.234.1.0/24; curl -sSL https://iomesh.run/install_iomesh.sh | bash - ``` 3. Verify that all pods are in `Running` state. If so, then IOMesh has been successfully installed. @@ -40,7 +40,11 @@ Before installing IOMesh, refer to the following to choose how you install IOMes > _NOTE:_ IOMesh resources left by running the above commands will be saved for troubleshooting if any error occurs during installation. You can run the command `curl -sSL https://iomesh.run/uninstall_iomesh.sh | sh -` to remove all IOMesh resources from the Kubernetes cluster. -## Custom Installation + > _NOTE:_ After installing IOMesh, the `prepare-csi` Pod will automatically start on all schedulable nodes in the Kubernetes cluster to install and configure `open-iscsi`. If the installation of `open-iscsi` is successful on all nodes, the system will automatically clean up the `prepare-csi` Pod. 
However, if the installation of `open-iscsi` fails on any node, [manual configuration of open-iscsi](../appendices/setup-worker-node) is required to determine the cause of the installation failure. + + > _NOTE:_ If `open-iscsi` is manually deleted after installing IOMesh, the `prepare-csi` Pod will not automatically start to install `open-iscsi` when reinstalling IOMesh. In this case, [manual configuration of open-iscsi](../appendices/setup-worker-node) is necessary. + +## Custom Online Installation **Prerequisite** @@ -129,6 +133,17 @@ Make sure the CPU architecture of your Kubernetes cluster is Intel x86_64, Hygon It is recommended that you only configure `values`. For more configurations, refer to [Pod Affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity). + - An optional step. Configure the `podDeletePolicy` field to determine whether the system should automatically delete the Pod and rebuild it on another healthy node when the Kubernetes node that hosts the Pod fails. This configuration applies only to the Pod with an IOMesh-created PVC mounted and the access mode set to `ReadWriteOnly`. + + If left unspecified, this field is set to `no-delete-pod` by default, indicating that the system won't automatically delete and rebuild the Pod in case of node failure. + ```yaml + csi-driver: + driver: + controller: + driver: + podDeletePolicy: "no-delete-pod" # Supports "no-delete-pod", "delete-deployment-pod", "delete-statefulset-pod", or "delete-both-statefulset-and-deployment-pod". + ``` + 6. On the master node, deploy the IOMesh cluster. ```shell @@ -201,8 +216,10 @@ Make sure the CPU architecture of your Kubernetes cluster is Intel x86_64, Hygon operator-87bb89877-kfs9d 1/1 Running 0 3m23s operator-87bb89877-z9tfr 1/1 Running 0 3m23s ``` + > _NOTE:_ After installing IOMesh, the `prepare-csi` Pod will automatically start on all schedulable nodes in the Kubernetes cluster to install and configure `open-iscsi`. 
If the installation of `open-iscsi` is successful on all nodes, the system will automatically clean up the `prepare-csi` Pod. However, if the installation of `open-iscsi` fails on any node, [manual configuration of open-iscsi](../appendices/setup-worker-node) is required to determine the cause of the installation failure. -## Offline Installation + > _NOTE:_ If `open-iscsi` is manually deleted after installing IOMesh, the `prepare-csi` Pod will not automatically start to install `open-iscsi` when reinstalling IOMesh. In this case, [manual configuration of open-iscsi](../appendices/setup-worker-node) is necessary. +## Custom Offline Installation **Prerequisite** @@ -212,7 +229,7 @@ Make sure the CPU architecture of your Kubernetes cluster is Intel x86_64, Hygon 1. Download the [IOMesh Offline Installation Package](../appendices/downloads) on each worker node and the master node, based on your CPU architecture. -2. Unpack the installation package on each worker node and the master node. Make sure to replace `` with `v1.0.0` and `` based on your CPU architecture. +2. Unpack the installation package on each worker node and the master node. Make sure to replace `` with `v1.0.1` and `` based on your CPU architecture. - Hygon x86_64: `hygon-amd64` - Intel x86_64: `amd64` - Kunpeng AArch64: `arm64` @@ -303,6 +320,17 @@ Make sure the CPU architecture of your Kubernetes cluster is Intel x86_64, Hygon ``` It is recommended that you only configure `values`. For more configurations, refer to [Pod Affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity). + - An optional step. Configure the `podDeletePolicy` field to determine whether the system should automatically delete the Pod and rebuild it on another healthy node when the Kubernetes node that hosts the Pod fails. This configuration applies only to the Pod with an IOMesh-created PVC mounted and the access mode set to `ReadWriteOnly`. 
+ + If left unspecified, this field is set to `no-delete-pod` by default, indicating that the system won't automatically delete and rebuild the Pod in case of node failure. + ```yaml + csi-driver: + driver: + controller: + driver: + podDeletePolicy: "no-delete-pod" # Supports "no-delete-pod", "delete-deployment-pod", "delete-statefulset-pod", or "delete-both-statefulset-and-deployment-pod". + ``` + 6. On the master node, deploy the IOMesh cluster. ```shell @@ -368,7 +396,9 @@ Make sure the CPU architecture of your Kubernetes cluster is Intel x86_64, Hygon operator-85877979-s94vz 1/1 Running 0 2m8s operator-85877979-xqtml 1/1 Running 0 2m8s ``` + > _NOTE:_ After installing IOMesh, the `prepare-csi` Pod will automatically start on all schedulable nodes in the Kubernetes cluster to install and configure `open-iscsi`. If the installation of `open-iscsi` is successful on all nodes, the system will automatically clean up the `prepare-csi` Pod. However, if the installation of `open-iscsi` fails on any node, [manual configuration of open-iscsi](../appendices/setup-worker-node) is required to determine the cause of the installation failure. + > _NOTE:_ If `open-iscsi` is manually deleted after installing IOMesh, the `prepare-csi` Pod will not automatically start to install `open-iscsi` when reinstalling IOMesh. In this case, [manual configuration of open-iscsi](../appendices/setup-worker-node) is necessary. 
diff --git a/docs/deploy-iomesh-cluster/setup-iomesh.md b/docs/deploy-iomesh-cluster/setup-iomesh.md
index 39c8b66f..b2b4e71a 100644
--- a/docs/deploy-iomesh-cluster/setup-iomesh.md
+++ b/docs/deploy-iomesh-cluster/setup-iomesh.md
@@ -22,17 +22,19 @@ IOMesh manages disks on Kubernetes worker nodes with OpenEBS [node-disk-manager(
    If successful, you should see output like this:
 
    ```output
-   NAME                                           NODENAME             PATH        FSTYPE   SIZE           CLAIMSTATE   STATUS   AGE
-   blockdevice-097b6628acdcd83a2fc6a5fc9c301e01   kind-control-plane   /dev/vdb1   ext4     107373116928   Unclaimed    Active   10m
-   blockdevice-3fa2e2cb7e49bc96f4ed09209644382e   kind-control-plane   /dev/sda            9659464192      Unclaimed    Active   10m
-   blockdevice-f4681681be66411f226d1b6a690270c0   kind-control-plane   /dev/sdb            1073742336      Unclaimed    Active   10m
+   NAME                                           NODENAME            PATH        FSTYPE   SIZE             CLAIMSTATE   STATUS   AGE
+   blockdevice-f001933979aa613a9c32e552d05a704a   iomesh-node-17-19   /dev/sda1   ext4     16000900661248   Unclaimed    Active   92d
+   blockdevice-648c1fffeab61e985aa0f8914278e9d0   iomesh-node-17-19   /dev/sdb             16000900661248   Unclaimed    Active   92d
+   blockdevice-f26f5b30099c20b1f6e993675614c301   iomesh-node-17-18   /dev/sdb             16000900661248   Unclaimed    Active   92d
+   blockdevice-8b697bad8a194069fbfd544e6db2ddb8   iomesh-node-17-19   /dev/sdc             16000900661248   Unclaimed    Active   92d
+   blockdevice-a3579a64869f799a623d3be86dce7c59   iomesh-node-17-18   /dev/sdc             16000900661248   Unclaimed    Active   92d
    ```
 
    > _NOTE:_
    > The field `FSTYPE` of each IOMesh block device should be blank.
 
    > _NOTE:_
-   > The status of a block device will only be updated when the disk is unplugged. Therefore, if a disk is partitioned or formatted, its status will not be immediately updated. To update information about disk partitioning and formatting, run the command `kubectl delete pod -n iomesh-system -l app=openebs-ndm` to restart the NDM pod, which will trigger a disk scan.
+   > The status of a block device will only be updated when the disk is plugged or unplugged.
Therefore, if a disk is partitioned or formatted, its status will not be immediately updated. To update information about disk partitioning and formatting, run the command `kubectl delete pod -n iomesh-system -l app=openebs-ndm` to restart the NDM pod, which will trigger a disk scan. 2. View the details of a specific block device object. Make sure to replace `` with the block device name. @@ -49,16 +51,16 @@ IOMesh manages disks on Kubernetes worker nodes with OpenEBS [node-disk-manager( internal.openebs.io/uuid-scheme: gpt generation: 1 labels: - iomesh.com/bd-devicePath: dev.sda + iomesh.com/bd-devicePath: dev.sdb iomesh.com/bd-deviceType: disk iomesh.com/bd-driverType: SSD iomesh.com/bd-serial: 24da000347e1e4a9 iomesh.com/bd-vendor: ATA - kubernetes.io/hostname: kind-control-plane + kubernetes.io/hostname: iomesh-node-17-19 ndm.io/blockdevice-type: blockdevice ndm.io/managed: "true" namespace: iomesh-system - name: blockdevice-3fa2e2cb7e49bc96f4ed09209644382e + name: blockdevice-648c1fffeab61e985aa0f8914278e9d0 # ... ``` Labels with `iomesh.com/bd-` are created by IOMesh and will be used for the device selector. @@ -78,14 +80,14 @@ Before configuring device map, familiarize yourself with the mount type and devi **Mount Type** |Mode|Mount Type| |---|---| -|`hybridFlash`|You must configure 2 mount types: `cacheWithJournal` and `dataStore`.
<br>• `cacheWithJournal` serves the performance layer of storage pool and **MUST** be a partitionable block device with a capacity greater than 60 GB. 2 partitions will be created: one for journal and the other for cache. Either SATA or NVMe SSD is recommended.<br>• `dataStore` is used for the capacity layer of storage pool. Either SATA or SAS HDD is recommended.|
-|`allflash`|You only need to configure 1 mount type: `dataStoreWithJournal`.<br>`dataStoreWithJournal` is used for the capacity layer of storage pool. It **MUST** be a partitionable block device with a capacity greater than 60 GB. 2 partitions will be created: one for `journal` and the other for `dataStore`. Either `SATA` or `NVMe SSD` is recommended.|
+|`hybridFlash`|Must configure `cacheWithJournal` and `dataStore`.<br>• `cacheWithJournal` serves the performance layer of storage pool and **MUST** be a partitionable block device with a capacity greater than 60 GB. Two partitions will be created: one for journal and the other for cache. Either SATA or NVMe SSD is recommended.<br>• `dataStore` is used for the capacity layer of storage pool. Either SATA or SAS HDD is recommended.|
+|`allflash`|Only need to configure `dataStoreWithJournal`.<br>
    `dataStoreWithJournal` is used for the capacity layer of storage pool. It **MUST** be a partitionable block device with a capacity greater than 60 GB. Two partitions will be created: one for `journal` and the other for `dataStore`. Either SATA or NVMe SSD is recommended.| **Device Selector** |Parameter|Value|Description| |---|---|---| -|selector | [metav1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#labelselector-v1-meta) | The label selector to filter block devices. | -|exclude|[block-device-name]| The block device to be excluded from being mounted. | +|selector | [metav1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#labelselector-v1-meta) | The label selector to filter block devices. | +|exclude|[block-device-name]| The block device to be excluded. | For more information, refer to [Kubernetes Labels and Selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). @@ -102,10 +104,10 @@ For more information, refer to [Kubernetes Labels and Selectors](https://kuberne spec: chunk: ``` -2. Copy and paste `deviceMap` contents from the following sample code. Fill in `mount-type` according to the deployment mode and configure `matchLabels` or `matchExpressions` and `exclude`. - - For the labels, you only need to choose one of the two fields `matchLabels` or `matchExpressions` to fill in. To know the keys and values of the block device, refer to Step 2 in [View Block Device Objects](#view-block-device-objects). +2. Configure `deviceMap`. Specifically, copy and paste the `deviceMap` content from the following sample code and fill in fields `mount-type`, `matchLabels`, `matchExpressions`, and `exclude` based on your deployment mode and block device information. Label information `` and `` can be obtained from Step 2 in [View Block Device Objects](#view-block-device-objects). > _NOTE:_ The field `FSTYPE` of each IOMesh block device should be blank. 
Make sure to exclude the block device that has a specified filesystem. + + > _NOTE:_ It is strictly prohibited to use disk names in the `deviceMap` in a production environment. ```yaml spec: @@ -123,71 +125,147 @@ For more information, refer to [Kubernetes Labels and Selectors](https://kuberne exclude: - # Enter the block device name to exclude it. ``` - If the deployment mode is `hybrid`, refer to the following example: +3. Verify that the `CLAIMSTATE` of the block devices you select becomes `Claimed`. - ```yaml - spec: - # ... - chunk: - # ... - deviceMap: - cacheWithJournal: - selector: - matchLabels: - iomesh.com/bd-deviceType: disk - matchExpressions: - - key: iomesh.com/bd-driverType - operator: In - values: - - SSD - exclude: - - blockdevice-097b6628acdcd83a2fc6a5fc9c301e01 - dataStore: - selector: - matchExpressions: - - key: iomesh.com/bd-driverType - operator: In - values: - - HDD - exclude: - - blockdevice-097b6628acdcd83a2fc6a5fc9c301e01 - # ... + ```bash + kubectl --namespace iomesh-system -o wide get blockdevice ``` - If the deployment mode is `allflash`, refer to the following example: - ```yaml - spec: - # ... - chunk: - # ... - deviceMap: - dataStoreWithJournal: - selector: - matchLabels: - iomesh.com/bd-deviceType: disk - matchExpressions: - - key: iomesh.com/bd-driverType - operator: In - values: - - SSD - exclude: - - blockdevice-097b6628acdcd83a2fc6a5fc9c301e01 - # ... 
+ If successful, you should see output like this: + + ```output + NAME NODENAME PATH FSTYPE SIZE CLAIMSTATE STATUS AGE + blockdevice-f001933979aa613a9c32e552d05a704a iomesh-node-17-19 /dev/sda1 ext4 16000900661248 Unclaimed Active 92d + blockdevice-648c1fffeab61e985aa0f8914278e9d0 iomesh-node-17-19 /dev/sdb 16000900661248 Claimed Active 92d + blockdevice-f26f5b30099c20b1f6e993675614c301 iomesh-node-17-18 /dev/sdb 16000900661248 Claimed Active 92d + blockdevice-8b697bad8a194069fbfd544e6db2ddb8 iomesh-node-17-19 /dev/sdc 16000900661248 Claimed Active 92d + blockdevice-a3579a64869f799a623d3be86dce7c59 iomesh-node-17-18 /dev/sdc 16000900661248 Claimed Active 92d + blockdevice-a6652946c90d5c3fca5ca452aac5b826 iomesh-node-17-18 /dev/sdd 16000900661248 Unclaimed Active 92d ``` - Once configured, block devices filtered out will be mounted on the IOMesh cluster. +## DeviceMap Examples -3. Verify that the `CLAIMSTATE` of the block devices you select becomes `Claimed`. +Below are three `deviceMap` examples based on all-flash and hybrid-flash deployment modes. 
Assuming a Kubernetes cluster has six block devices, the details are as follows: - ```bash - kubectl --namespace iomesh-system -o wide get blockdevice +```output +NAME NODENAME PATH FSTYPE SIZE CLAIMSTATE STATUS AGE +blockdevice-f001933979aa613a9c32e552d05a704a iomesh-node-17-19 /dev/sda1 ext4 16000900661248 Unclaimed Active 92d +blockdevice-648c1fffeab61e985aa0f8914278e9d0 iomesh-node-17-19 /dev/sdb 16000900661248 Unclaimed Active 92d +blockdevice-f26f5b30099c20b1f6e993675614c301 iomesh-node-17-18 /dev/sdb 16000900661248 Unclaimed Active 92d +blockdevice-8b697bad8a194069fbfd544e6db2ddb8 iomesh-node-17-19 /dev/sdc 16000900661248 Unclaimed Active 92d +blockdevice-a3579a64869f799a623d3be86dce7c59 iomesh-node-17-18 /dev/sdc 16000900661248 Unclaimed Active 92d +blockdevice-a6652946c90d5c3fca5ca452aac5b826 iomesh-node-17-18 /dev/sdd 16000900661248 Unclaimed Active 92d +``` + +You can filter the block devices to be used in IOMesh based on the labels of the block devices. + +**Example 1: Hybrid Configuration `deviceMap`** + +In this example, all SSD disks in the Kubernetes cluster are used as `cacheWithJournal`, and all HDD disks are used as `dataStore`. The block devices `blockdevice-a6652946c90d5c3fca5ca452aac5b826` and `blockdevice-f001933979aa613a9c32e552d05a704a` are excluded from the selection. + +```yaml +spec: + # ... + chunk: + # ... + deviceMap: + cacheWithJournal: + selector: + matchLabels: + iomesh.com/bd-deviceType: disk + matchExpressions: + - key: iomesh.com/bd-driverType + operator: In + values: + - SSD + exclude: + - blockdevice-a6652946c90d5c3fca5ca452aac5b826 + dataStore: + selector: + matchExpressions: + - key: iomesh.com/bd-driverType + operator: In + values: + - HDD + exclude: + - blockdevice-f001933979aa613a9c32e552d05a704a + # ... +``` +Note that after the configuration is complete, any additional SSD or HDD disks added to the nodes later will be immediately managed by IOMesh. 
If you do not want this automatic management behavior, refer to [Example 2: Hybrid Configuration `deviceMap`](#example-2-hybrid-configuration-devicemap) for how to create a custom label for disks. + +**Example 2: Hybrid Configuration `deviceMap`** + +In this example, the block devices located at the `/dev/sdb` path in the Kubernetes cluster are used as `cacheWithJournal`, and the block devices located at the `/dev/sdc` path are used as `dataStore`. + +Based on the information of the block devices provided above, the block devices under the `/dev/sdb` and `/dev/sdc` paths are as follows: + +Block devices under `/dev/sdb` path: +- `blockdevice-648c1fffeab61e985aa0f8914278e9d0` +- `blockdevice-f26f5b30099c20b1f6e993675614c301` + +Block devices under `/dev/sdc` path: +- `blockdevice-8b697bad8a194069fbfd544e6db2ddb8` +- `blockdevice-a3579a64869f799a623d3be86dce7c59` + +1. Run the following commands to create a custom label for the block devices under the `/dev/sdb` path in the Kubernetes cluster. `mountType` is the key of the label, and `cacheWithJournal` is the value of the label. + ```shell + kubectl label blockdevice blockdevice-648c1fffeab61e985aa0f8914278e9d0 mountType=cacheWithJournal -n iomesh-system + kubectl label blockdevice blockdevice-f26f5b30099c20b1f6e993675614c301 mountType=cacheWithJournal -n iomesh-system ``` - If successful, you should see output like this: +2. Run the following commands to create a custom label for the block devices under the `/dev/sdc` path in the Kubernetes cluster. `mountType` is the key of the label, and `dataStore` is the value of the label. 
- ```output
- NAME NODENAME PATH FSTYPE SIZE CLAIMSTATE STATUS AGE
- blockdevice-097b6628acdcd83a2fc6a5fc9c301e01 kind-control-plane /dev/vdb1 ext4 107373116928 Unclaimed Active 11m
- blockdevice-3fa2e2cb7e49bc96f4ed09209644382e kind-control-plane /dev/sda 9659464192 Claimed Active 11m
- blockdevice-f4681681be66411f226d1b6a690270c0 kind-control-plane /dev/sdb 1073742336 Claimed Active 11m
+ ```shell
+ kubectl label blockdevice blockdevice-8b697bad8a194069fbfd544e6db2ddb8 mountType=dataStore -n iomesh-system
+ kubectl label blockdevice blockdevice-a3579a64869f799a623d3be86dce7c59 mountType=dataStore -n iomesh-system
 ```
+
+After the labels are created, the configuration of `deviceMap` is as follows:
+
+```yaml
+spec:
+  # ...
+  chunk:
+    # ...
+    deviceMap:
+      cacheWithJournal:
+        selector:
+          matchExpressions:
+          - key: mountType
+            operator: In
+            values:
+            - cacheWithJournal
+      dataStore:
+        selector:
+          matchExpressions:
+          - key: mountType
+            operator: In
+            values:
+            - dataStore
+  # ...
+```
+
+**Example 3: All-Flash Configuration `deviceMap`**
+
+In this example, all SSD disks in the Kubernetes cluster are used as `dataStoreWithJournal`. The block device `blockdevice-a6652946c90d5c3fca5ca452aac5b826` is excluded from the selection.
+```yaml
+spec:
+  # ...
+  chunk:
+    # ...
+    deviceMap:
+      dataStoreWithJournal:
+        selector:
+          matchLabels:
+            iomesh.com/bd-deviceType: disk
+          matchExpressions:
+          - key: iomesh.com/bd-driverType
+            operator: In
+            values:
+            - SSD
+        exclude:
+        - blockdevice-a6652946c90d5c3fca5ca452aac5b826
+  # ...
+```
+Note that after the configuration is complete, any additional SSD disks added to the nodes later will be immediately managed by IOMesh. If you do not want this automatic management behavior, refer to [Example 2: Hybrid Configuration `deviceMap`](#example-2-hybrid-configuration-devicemap) for how to create a custom label for disks.
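+
+The custom-label approach from Example 2 works for all-flash mode in the same way. Below is a sketch that assumes the target SSDs have been labeled with `mountType=dataStoreWithJournal`, a hypothetical label value following the Example 2 convention:
+
+```yaml
+spec:
+  # ...
+  chunk:
+    # ...
+    deviceMap:
+      dataStoreWithJournal:
+        selector:
+          matchExpressions:
+          - key: mountType
+            operator: In
+            values:
+            - dataStoreWithJournal
+  # ...
+```
+With this configuration, newly added SSDs are not picked up automatically; a disk joins the `dataStoreWithJournal` pool only after you label it explicitly.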
+
diff --git a/docs/volume-operations/encrypt-pv.md b/docs/volume-operations/authenticate-pv.md
similarity index 60%
rename from docs/volume-operations/encrypt-pv.md
rename to docs/volume-operations/authenticate-pv.md
index 07b10eec..2cb7043b 100644
--- a/docs/volume-operations/encrypt-pv.md
+++ b/docs/volume-operations/authenticate-pv.md
@@ -1,33 +1,33 @@
---
-id: encrypt-pv
-title: Create Encrypted PV
-sidebar_label: Create Encrypted PV
+id: authenticate-pv
+title: Create PV with Authentication
+sidebar_label: Create PV with Authentication
---
-IOMesh allows for volume encryption using Kubernetes secret. Encryption is implemented per StorageClass, and it requires configuration of a secret for encryption along with a CSI secret for authentication in the StorageClass.
+IOMesh allows for volume authentication using Kubernetes secrets. Authentication is implemented per StorageClass and requires two secrets to be configured in the StorageClass: one for authentication and one that provides the authentication information.
-Every time a pod declares the use of a encrypted PVC, the PVC will only be allowed for use if the two secrets match exactly.
+Every time a pod declares the use of an authenticated PVC, the PVC will only be allowed for use if the two secrets match exactly.
**Precautions**
- Since authentication is implemented through iSCSI CHAP, the secret password must be 12-16 characters long according to the password length requirements of the CHAP protocol.
- To ensure that a StorageClass with configured secrets is accessible only to intended users, Role-Based Access Control (RBAC) is necessary. This is because StorageClass is not an object limited to a specific namespace, and some versions of Kubernetes allow all namespaces to access all StorageClasses.
**Procedure**
-1. Create a secret that holds credentials for encrypting a volume.
+1. Create a secret for PV authentication.
In the following command, replace `iomesh` with the username, `iomesh-system` with the namespace where your IOMesh cluster resides, and `abcdefghijklmn` with the password.
 ```bash
 kubectl create secret generic volume-secret -n iomesh-system --from-literal=username=iomesh --from-literal=password=abcdefghijklmn
 ```
-1. Create a CSI secret that points to the encrypted secret from step 1. Its username and password remain the same as the secret in step 1.
+2. Create a secret that provides authentication information. Its username and password must be the same as those of the secret in step 1.
 ```bash
 kubectl create secret generic user-secret -n user-namespace --from-literal=username=iomesh --from-literal=password=abcdefghijklmn
 ```
-2. Create a StorageClass. Enable volume encryption and specify the secrets as instructed below.
+3. Create a StorageClass. Enable PV authentication and specify the secrets as instructed below.
 ```yaml
- # Source: encrypt-sc.yaml
+ # Source: authenticate-sc.yaml
 kind: StorageClass
 apiVersion: storage.k8s.io/v1
 metadata:
@@ -39,34 +39,34 @@ Every time a pod declares the use of a encrypted PVC, the PVC will only be allow
 csi.storage.k8s.io/fstype: "ext4"
 replicaFactor: "2"
 thinProvision: "true"
- # Enable PV encryption.
+ # Enable PV authentication.
 auth: "true"
- # The secret holding credentials for encrypting a volume, which will be fetched by the CSI reading in the `annotations` field of the PVC.
+ # The secret for PV authentication.
 csi.storage.k8s.io/controller-publish-secret-name: volume-secret
 csi.storage.k8s.io/controller-publish-secret-namespace: iomesh-system
- # The CSI secret that points to the encrypted secret.
+ # The secret that provides authentication information, which will be fetched from the `annotations` field of the PVC.
csi.storage.k8s.io/node-stage-secret-name: ${pvc.annotations['iomesh.com/key']} csi.storage.k8s.io/node-stage-secret-namespace: ${pvc.namespace} ``` ```shell - kubectl apply -f encrypt-sc.yaml + kubectl apply -f authenticate-sc.yaml ``` |Field|Description| |---|---| - |`csi.storage.k8s.io/controller-publish-secret-name`| The secret created in Step 1, holding credentials for encryption.| - |`csi.storage.k8s.io/controller-publish-secret-namespace`|The namespace where the encryption secret resides.| - |`csi.storage.k8s.io/node-stage-secret-name`|The CSI secret created in Step 2, pointing to and verifying the encrypted secret. | - |`csi.storage.k8s.io/node-stage-secret-namespace`|The namespace where the CSI secret resides.| + |`csi.storage.k8s.io/controller-publish-secret-name`| The secret created in Step 1 for PV authentication.| + |`csi.storage.k8s.io/controller-publish-secret-namespace`|The namespace where the secret in Step 1 resides.| + |`csi.storage.k8s.io/node-stage-secret-name`|The secret created in Step 2, which provides authentication information. | + |`csi.storage.k8s.io/node-stage-secret-namespace`|The namespace where the secret in Step 2 resides.| -3. Create a PVC and specify `annotations.iomesh.com/key` with the CSI secret created in Step 2. +4. Create a PVC and specify `annotations.iomesh.com/key` with the secret created in Step 2. ```yaml - # Source: encrypt-pvc.yaml + # Source: authenticate-pvc.yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: user-pvc namespace: user-namespace - # Specify the CSI secret created in Step 2. + # Specify the secret created in Step 2. 
annotations: iomesh.com/key: user-secret spec: @@ -78,5 +78,5 @@ Every time a pod declares the use of a encrypted PVC, the PVC will only be allow storage: 2Gi ``` ```shell - kubectl apply -f encrypt-pvc.yaml + kubectl apply -f authenticate-pvc.yaml ``` \ No newline at end of file diff --git a/website/sidebars.json b/website/sidebars.json index b8b66e14..544e0311 100644 --- a/website/sidebars.json +++ b/website/sidebars.json @@ -6,7 +6,6 @@ ], "Deploy IOMesh": [ "deploy-iomesh-cluster/prerequisites", - "deploy-iomesh-cluster/setup-worker-node", "deploy-iomesh-cluster/install-iomesh", "deploy-iomesh-cluster/setup-iomesh", "deploy-iomesh-cluster/activate-license" @@ -14,7 +13,7 @@ "Volume Operations": [ "volume-operations/create-storageclass", "volume-operations/create-pv", - "volume-operations/encrypt-pv", + "volume-operations/authenticate-pv", "volume-operations/expand-pv", "volume-operations/clone-pv" ], @@ -46,6 +45,7 @@ "Appendices": [ "appendices/release-notes", "appendices/downloads", + "appendices/setup-worker-node", "appendices/iomesh-metrics", "appendices/faq" ]