This repository was archived by the owner on Jan 15, 2023. It is now read-only.
Hi,
I'm facing the following situation:
After the cluster finishes creating, only one node runs the etcd-aws service. On the other two I see:
Failed Units: 1
etcd-aws.service
and journalctl -xe shows:
May 06 12:28:46 ip-10-242-131-220.ec2.internal locksmithd[619]: [etcd.service etcd2.service] are inactive
May 06 12:28:46 ip-10-242-131-220.ec2.internal locksmithd[619]: Unlocking old locks failed: [etcd.service etcd2.service] are inactive. Retrying in 5m0s.
The etcd-aws service (and its Docker container) only starts working if I start it by hand, as root, with systemctl start etcd-aws.
To recap: only one node starts etcd-aws after the CloudFormation deployment; on the other two, the etcd-aws service has to be started by hand.
Any suggestions?