This repository was archived by the owner on Jan 15, 2023. It is now read-only.

Only one node starts etcd-aws #7

@pboguk


Hi,

I'm facing the following situation: after the cluster finishes creating, only one node runs the etcd-aws service. On the other two I see:

Failed Units: 1
etcd-aws.service

journalctl -xe
May 06 12:28:46 ip-10-242-131-220.ec2.internal locksmithd[619]: [etcd.service etcd2.service] are inactive
May 06 12:28:46 ip-10-242-131-220.ec2.internal locksmithd[619]: Unlocking old locks failed: [etcd.service etcd2.service] are inactive. Retrying in 5m0s.

The etcd-aws service (and its Docker container) only starts working if I start it by hand (as root, via systemctl start etcd-aws).
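For reference, the manual workaround on each affected node is just (run as root; unit name as reported above):

```shell
# On each node where etcd-aws did not come up after the CloudFormation deployment:
systemctl status etcd-aws      # confirm the unit is in a failed/inactive state
systemctl start etcd-aws       # start the service (and its Docker container) by hand
journalctl -u etcd-aws -e      # inspect the unit's own logs if it fails again
```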

To recap:
Only one node starts etcd-aws after the CloudFormation deployment; on the other two I have to start the etcd-aws service by hand.

Any suggestions?
