@@ -57,11 +57,12 @@ The following tools are needed:

$ minikube version
---
- minikube version: v1.12.3
- commit: 2243b4b97c131e3244c5f014faedca0d846599f5-dirty
+ minikube version: v1.17.1
+ commit: 043bdca07e54ab6e4fc0457e3064048f34133d7e
+

5. **kind** (optional) is another tool for creating a local cluster. It
- can be used instead of the minicube. Installation instructions can be
+ can be used instead of minikube. Version 0.6.0 or higher is required
+ (see the example below). Installation instructions can be
found
`here <https://kind.sigs.k8s.io/docs/user/quick-start/#installation>`_.
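+
+ For example, a cluster of the same Kubernetes version as below can be
+ created like this (a sketch: the ``--image`` flag is kind's way of
+ pinning the node version, and the exact ``kindest/node`` tag is an
+ assumption, check the kind release notes for the tag matching your
+ kind version):
+
+ .. code-block:: console
+
+    $ kind create cluster --image kindest/node:v1.16.4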
@@ -184,14 +185,19 @@ Create a Kubernetes cluster of version 1.16.4 with 4GB of RAM (recommended):

$ minikube start --kubernetes-version v1.16.4 --memory 4096
---
- 😄 minikube v1.12.3 on Ubuntu 18.10
- ✨ Automatically selected the docker driver. Other choices: kvm2, virtualbox
+ 😄 minikube v1.17.1 on Ubuntu 18.10
+ ✨ Automatically selected the docker driver. Other choices: kvm2, virtualbox, ssh
👍 Starting control plane node minikube in cluster minikube
+ 🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=4096MB) ...
- 🐳 Preparing Kubernetes v1.16.4 on Docker 19.03.8 ...
+ 🐳 Preparing Kubernetes v1.16.4 on Docker 20.10.2 ...
+ ▪ Generating certificates and keys ...
+ ▪ Booting up control plane ...
+ ▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
- 🌟 Enabled addons: default-storageclass, storage-provisioner
- 🏄 Done! kubectl is now configured to use "minikube"
+ 🌟 Enabled addons: storage-provisioner, default-storageclass
+ 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
+

Wait for the cluster state to be *Ready*:
@@ -850,6 +856,74 @@ storages. You can change the size of the allocated memory using the
error can also be resolved by increasing the size of the physical
cluster disk.

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CrashLoopBackOff status
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ Pods do not start and have the status ``CrashLoopBackOff``. In short,
+ this means that the container starts and crashes soon after due to an
+ error in the code.
+
+ .. code-block:: console
+
+    $ kubectl -n tarantool get pods
+    ---
+    NAME                                 READY   STATUS             RESTARTS   AGE
+    routers-0-0                          0/1     CrashLoopBackOff   6          8m4s
+    storages-0-0                         0/1     CrashLoopBackOff   6          8m4s
+    tarantool-operator-b54fcb6f9-2xzpn   1/1     Running            0          12m
+
+ Running ``kubectl describe pod`` gives more information about that pod:
+
+ .. code-block:: console
+
+    $ kubectl -n tarantool describe pod routers-0-0
+    ---
+    Events:
+      Type     Reason   Age                    From               Message
+      ----     ------   ----                   ----               -------
+      ...
+      Normal   Pulling  39m                    kubelet, minikube  Pulling image "vanyarock01/test-app:0.1.0-1-g4577716"
+      Normal   Pulled   39m                    kubelet, minikube  Successfully pulled image "vanyarock01/test-app:0.1.0-1-g4577716"
+      Normal   Created  37m (x5 over 39m)      kubelet, minikube  Created container pim-storage
+      Normal   Pulled   37m (x4 over 39m)      kubelet, minikube  Container image "vanyarock01/test-app:0.1.0-1-g4577716" already present on machine
+      Normal   Started  37m (x5 over 39m)      kubelet, minikube  Started container pim-storage
+      Warning  BackOff  4m25s (x157 over 38m)  kubelet, minikube  Back-off restarting failed container
+
+ We see that the container cannot start. More precisely, the container
+ starts but then stops due to an internal error. To understand what is
+ happening to it, let's look at its logs:
+
+ .. code-block:: console
+
+    $ kubectl -n tarantool logs routers-0-0
+    ---
+    2021-02-28 15:18:59.866 [1] main/103/init.lua I> Using advertise_uri "routers-0-0.test-app.tarantool.svc.cluster.local:3301"
+    2021-02-28 15:18:59.866 [1] main/103/init.lua I> Membership encryption enabled
+    2021-02-28 15:18:59.963 [1] main/103/init.lua I> Probe uri was successful
+    2021-02-28 15:18:59.964 [1] main/103/init.lua I> Membership BROADCAST sent to 127.0.0.1:3302
+    2021-02-28 15:19:00.061 [1] main/103/init.lua I> Membership BROADCAST sent to 172.17.255.255:3302
+    2021-02-28 15:19:00.062 [1] main/103/init.lua I> Membership BROADCAST sent to 127.0.0.1:3301
+    2021-02-28 15:19:00.063 [1] main/103/init.lua I> Membership BROADCAST sent to 172.17.255.255:3301
+    2021-02-28 15:19:00.064 [1] main/103/init.lua I> Membership BROADCAST sent to 127.0.0.1:3300
+    2021-02-28 15:19:00.065 [1] main/103/init.lua I> Membership BROADCAST sent to 172.17.255.255:3300
+    2021-02-28 15:19:00.066 [1] main/107/http/0.0.0.0:8081 I> started
+    2021-02-28 15:19:00.069 [1] main/103/init.lua I> Listening HTTP on 0.0.0.0:8081
+    2021-02-28 15:19:00.361 [1] main/108/remote_control/0.0.0.0:3301 I> started
+    2021-02-28 15:19:00.361 [1] main/103/init.lua I> Remote control bound to 0.0.0.0:3301
+    2021-02-28 15:19:00.362 [1] main/103/init.lua I> Remote control ready to accept connections
+    2021-02-28 15:19:00.362 [1] main/103/init.lua I> Instance state changed: -> Unconfigured
+    2021-02-28 15:19:00.365 [1] main/103/init.lua I> server alias routers-0-0
+    2021-02-28 15:19:00.365 [1] main/103/init.lua I> advertise uri routers-0-0.test-app.tarantool.svc.cluster.local:3301
+    2021-02-28 15:19:00.365 [1] main/103/init.lua I> working directory /var/lib/tarantool/test-app.routers-0-0
+    2021-02-28 15:19:00.365 [1] main utils.c:1014 E> LuajitError: /usr/share/tarantool/test-app/init.lua:42: unhandled error
+    2021-02-28 15:19:00.365 [1] main F> fatal error, exiting the event loop
+
+ We see that the application crashes with the error ``unhandled error``.
+ This particular error is just an example; in reality it could be any
+ error that crashes the Tarantool instance. Fix the bug in the
+ application and roll out an updated version, as sketched below.
+
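+ A minimal sketch of the rollout, assuming the application was deployed
+ with the Helm chart used earlier in this guide (the release name, chart
+ name, values file, and the rebuilt image tag are assumptions):
+
+ .. code-block:: console
+
+    $ # first bump image.tag in values.yaml to the fixed build (hypothetical tag)
+    $ helm upgrade -n tarantool test-app tarantool/cartridge -f values.yaml
+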
.. _cartridge_kubernetes_customization:
--------------------------------------------------------------------------------