docs/tasks/administer-cluster/configure-upgrade-etcd.md (9 additions, 9 deletions)

@@ -184,7 +184,7 @@ Before starting the restore operation, a snapshot file must be present. It can e
If the access URLs of the restored cluster are changed from the previous cluster, the Kubernetes API server must be reconfigured accordingly. In this case, restart the Kubernetes API server with the flag `--etcd-servers=$NEW_ETCD_CLUSTER` instead of the flag `--etcd-servers=$OLD_ETCD_CLUSTER`. Replace `$NEW_ETCD_CLUSTER` and `$OLD_ETCD_CLUSTER` with the respective IP addresses. If a load balancer is used in front of an etcd cluster, you might need to update the load balancer instead.
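
As a rough illustration (the static pod manifest path and the endpoint values
below are assumptions, not taken from this page), updating an API server that
runs as a static pod might look like this:

```shell
# Hypothetical sketch: point a static-pod kube-apiserver at the restored
# etcd endpoints. Paths and addresses are placeholders.
OLD_ETCD_CLUSTER="https://10.0.0.10:2379"
NEW_ETCD_CLUSTER="https://10.0.0.20:2379"

sed -i "s|--etcd-servers=${OLD_ETCD_CLUSTER}|--etcd-servers=${NEW_ETCD_CLUSTER}|" \
  /etc/kubernetes/manifests/kube-apiserver.yaml
# The kubelet picks up the manifest change and restarts the apiserver pod.
```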

If the majority of etcd members have permanently failed, the etcd cluster is considered failed. In this scenario, Kubernetes cannot make any changes to its current state. Although the scheduled pods might continue to run, no new pods can be scheduled. In such cases, recover the etcd cluster and potentially reconfigure the Kubernetes API server to fix the issue.
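
Recovery usually starts from a backup. A minimal sketch, assuming a previously
saved snapshot file and a placeholder data directory (neither is specified on
this page):

```shell
# Hypothetical recovery sketch: rebuild a member's data directory from a
# snapshot, then start etcd against it. File names are placeholders.
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
  --data-dir /var/etcd/data-restored
etcd --data-dir /var/etcd/data-restored &
# If the endpoints changed, also update the apiserver's --etcd-servers flag.
```
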
## Upgrading and rolling back etcd clusters

@@ -212,7 +212,7 @@ Note that we need to migrate both the etcd versions that we are using (from 2.2.
to at least 3.0.x) as well as the version of the etcd API that Kubernetes talks to. The etcd 3.0.x
binaries support both the v2 and v3 API.

This document describes how to do this migration. If you want to skip the
background and cut right to the procedure, see [Upgrade
Procedure](#upgrade-procedure).

@@ -227,7 +227,7 @@ There are requirements on how an etcd cluster upgrade can be performed. The prim
Upgrade only one minor release at a time. For example, we cannot upgrade directly from 2.1.x to 2.3.x.
Within patch releases it is possible to upgrade and downgrade between arbitrary versions. Starting a cluster for
any intermediate minor release, waiting until the cluster is healthy, and then
shutting down the cluster will perform the migration. For example, to upgrade from version 2.1.x to 2.3.y,
it is enough to start etcd at version 2.2.z, wait until it is healthy, stop it, and then start the
2.3.y version.
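
A minimal sketch of that step-through (binary locations, the health check, and
the data directory below are assumptions for illustration):

```shell
# Hypothetical walk-through of upgrading from 2.1.x to 2.3.y via 2.2.z.
# Binary paths and the data directory are placeholders.
/opt/etcd-v2.2.z/etcd --data-dir /var/etcd/data &
ETCD_PID=$!
etcdctl cluster-health            # repeat until the cluster reports healthy
kill "$ETCD_PID"; wait "$ETCD_PID"
/opt/etcd-v2.3.y/etcd --data-dir /var/etcd/data &
```
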
@@ -239,7 +239,7 @@ The etcd team has provided a [custom rollback tool](https://git.k8s.io/kubernete
but the rollback tool has these limitations:

* This custom rollback tool is not part of the etcd repo and does not receive the same
testing as the rest of etcd. We are testing it in a couple of end-to-end tests.
There is only community support here.

* The rollback can be done only from the 3.0.x version (that is using the v3 API) to the
@@ -263,13 +263,13 @@ rollback might require restarting all Kubernetes components on all nodes.
**Note**: At the time of writing, both Kubelet and KubeProxy use “resource
version” only for watching (i.e. they do not use resource versions for anything
else), and both use the reflector and/or informer frameworks for watching
(i.e. they don’t send watch requests themselves). If those frameworks
can’t renew a watch, they start from the “current version” by doing “list + watch
from the resource version returned by list”. That means that if the apiserver
is down for the period of the rollback, all node components simply
restart their watches and start from “now” when the apiserver is back, and it
comes back with a new resource version. That means that restarting node
components is not needed. But the assumptions here may not hold forever.
{: .note}

### Design
@@ -284,7 +284,7 @@ focus on them at all. We focus only on the upgrade/rollback here.
### New etcd Docker image

We decided to completely change the content of the etcd image and the way it works.
So far, the Docker image for etcd in version X has contained only the etcd and
etcdctl binaries.

Going forward, the Docker image for etcd in version X will contain multiple
@@ -337,7 +337,7 @@ script works as follows:
1. Verify that the detected version is 3.0.x with the v3 API, and the
desired version is 2.2.1 with the v2 API. We don’t support any other rollback.
1. If so, we run the custom tool provided by the etcd team to do the offline
rollback. This tool reads the v3 formatted data and writes it back to disk
in v2 format.
1. Finally update the contents of the version file.

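As an illustration of the steps above (the version-file name and the rollback
tool’s interface are assumptions for this sketch, not the real script):

```shell
# Hypothetical sketch of the rollback flow; file names and the tool's
# flags are placeholders.
DATA_DIR=/var/etcd/data
read -r CURRENT_VERSION CURRENT_API < "${DATA_DIR}/version.txt"

if [ "${CURRENT_VERSION}" = "3.0.17" ] && [ "${CURRENT_API}" = "3" ]; then
  # Offline rollback: read the v3-formatted data, write it back in v2 format.
  ./rollback --data-dir "${DATA_DIR}"
  echo "2.2.1 2" > "${DATA_DIR}/version.txt"   # finally, update the version file
fi
```
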
@@ -350,7 +350,7 @@ Simply modify the command line in the etcd manifest to:
Starting in Kubernetes version 1.6, this has been done in the manifests for new
Google Compute Engine clusters. You should also specify these environment
variables. In particular, you must keep `STORAGE_MEDIA_TYPE` set to
`application/json` if you wish to preserve the option to roll back.
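
For example (a minimal sketch; only `STORAGE_MEDIA_TYPE` is named on this page,
the other variable names and values are assumptions):

```shell
# Hypothetical environment for the etcd static pod; keep JSON storage to
# preserve the ability to roll back. Other names/values are placeholders.
export STORAGE_MEDIA_TYPE=application/json
export TARGET_VERSION=3.0.17     # assumed target etcd version
export TARGET_STORAGE=etcd3      # assumed storage backend selector
```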