From 05115e0107993fdff59a6b69d19b9d04505dbdfa Mon Sep 17 00:00:00 2001
From: Damian Zaremba
Date: Thu, 5 Dec 2024 14:15:40 +0000
Subject: [PATCH] kubernetes-volumes - re-work example output

Spinning up a clean cluster, the `civo-volume` storage class has
`WaitForFirstConsumer` rather than `Immediate` volume binding. This
means the volume is not created when the volume claim is created, but
rather when the pod is scheduled.

The `get pv` output is moved to after the `get pods` output, which will
successfully show the relevant volume.
---
 content/docs/kubernetes/kubernetes-volumes.md | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/content/docs/kubernetes/kubernetes-volumes.md b/content/docs/kubernetes/kubernetes-volumes.md
index 7bea17f..4e44d08 100644
--- a/content/docs/kubernetes/kubernetes-volumes.md
+++ b/content/docs/kubernetes/kubernetes-volumes.md
@@ -20,7 +20,7 @@ A [cluster running on Civo](./create-a-cluster.md) will have `civo-volume` as th
 kubectl get sc
 NAME                    PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
 local-path              rancher.io/local-path   Delete          WaitForFirstConsumer   false                  10m
-civo-volume (default)   csi.civo.com            Delete          Immediate              false                  10m
+civo-volume (default)   csi.civo.com            Delete          WaitForFirstConsumer   false                  10m
 ```
 
 ## Creating a Persistent Volume Claim (PVC)
@@ -49,13 +49,9 @@ $ kubectl create -f pvc.yaml
 persistentvolumeclaim/civo-volume-test created
 ```
 
-This will have created the PersistentVolume and claim:
+This will have created the PersistentVolumeClaim:
 
 ```console
-$ kubectl get pv
-NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
-pvc-11509930-bf05-49ec-8814-62744e4606c4   3Gi        RWO            Delete           Bound    default/civo-volume-test   civo-volume             2s
-
 $ kubectl get pvc
 NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
 civo-volume-test   Bound    pvc-11509930-bf05-49ec-8814-62744e4606c4   3Gi        RWO            civo-volume    13m
@@ -97,6 +93,14 @@ NAME                READY   STATUS    RESTARTS   AGE
 civo-vol-test-pod   1/1     Running   0          54s
 ```
+And the associated volume, specified in the claim:
+
+```console
+$ kubectl get pv
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
+pvc-11509930-bf05-49ec-8814-62744e4606c4   3Gi        RWO            Delete           Bound    default/civo-volume-test   civo-volume             2s
+```
+
 ## Cordoning and deleting a node to show persistence
 
 If you cordon the node and delete the pod from above, you should be able to re-create it and have it spin up on a different node but attached to the pre-defined persistent volume.
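
Note for reviewers: the `pvc.yaml` referenced in the page is not part of this patch, but a claim matching the example output might look like the sketch below. Every field value here is an assumption reconstructed from the output shown (name `civo-volume-test`, 3Gi, RWO, the `civo-volume` class); the real file in the docs may differ.

```yaml
# Hypothetical pvc.yaml, reconstructed from the example output above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: civo-volume-test
spec:
  accessModes:
    - ReadWriteOnce              # shown as RWO in the `get pvc` output
  storageClassName: civo-volume  # the WaitForFirstConsumer class above
  resources:
    requests:
      storage: 3Gi               # shown as the claim capacity
```

With `WaitForFirstConsumer` binding, creating such a claim on its own leaves it `Pending` and no PersistentVolume appears; the volume is provisioned only once a pod that mounts the claim is scheduled, which is why this patch moves the `get pv` output to after the `get pods` output.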