If Honeycomb is started on a swarm worker, it will simply fail to find any endpoints. I'm not sure of the best strategy to deal with this. Ideally it could be started as a global service and "just work" on any nodes that are managers.
Some options, loosely ordered by preference:
- Do not listen for HTTPS connections unless running on a manager. This relies on Docker's load balancing noticing the closed port and not routing any requests to the container. I'm not sure that's how it actually works when the container is otherwise considered healthy.
- Run as a replicated service and use a constraint to limit it to managers. This is what we do currently; however, services don't appear to have "affinity" options, so we can't guarantee that the containers are spread across nodes like we can with a global service.
- Write a proxy for the docker managers that is exposed on the routing mesh. This suffers from the same problems, but since load on the docker API is internal and controlled there's less of a need to distribute it across nodes. It also doesn't matter so much if the API is unavailable for short periods while such a proxy container restarts. I'd rather avoid needing multiple services, though.
- Shut down with an error, maybe with a sleep first to prevent cycling too quickly. Need to work out if/how Docker will reschedule problematic containers on other nodes.
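For reference, the current approach (replicated service constrained to managers) looks roughly like this as a stack file fragment; the service and image names are placeholders. Note that more recent Docker releases added placement *preferences* (`--placement-pref`), which may partially address the spreading concern:

```yaml
# Hypothetical stack file entry; service/image names are placeholders.
services:
  honeycomb:
    image: honeycomb
    deploy:
      mode: replicated
      replicas: 3
      placement:
        constraints:
          - node.role == manager
```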
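The first and last options both hinge on knowing whether the local node is currently a manager. A minimal sketch of that check, based on the shape of the Docker Engine `GET /info` response (`Swarm.ControlAvailable` is true only on managers); the `serve`/`sleep` wiring and the 30-second backoff are hypothetical:

```python
import sys
import time


def is_manager(info: dict) -> bool:
    """True when this node is a reachable swarm manager.

    `info` is the decoded JSON body of the Docker Engine `GET /info`
    endpoint; `Swarm.ControlAvailable` is true only on managers.
    """
    return bool(info.get("Swarm", {}).get("ControlAvailable"))


def run(info: dict, serve, sleep=time.sleep) -> None:
    # Hypothetical startup wiring: serve HTTPS only on managers;
    # otherwise sleep before exiting with an error so the scheduler
    # doesn't cycle the container too quickly.
    if is_manager(info):
        serve()
    else:
        sleep(30)
        sys.exit(1)
```

Whether exiting actually causes the swarm scheduler to try a different node (rather than restarting on the same one) is the open question noted above.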