feat: Dynamic agent container spawning via Docker API #146
Description
Problem
Currently, copilot-bridge runs all bots (agents) as part of a single process. When moving to a containerised architecture, each agent should run in its own isolated container - with its own workspace, identity, and resource boundaries. There is currently no mechanism for the admin agent to spawn new agent containers on demand.
Motivation
In a multi-agent Docker architecture:
- Each agent should be isolated: its own filesystem, network access, and process space
- The admin agent (copilot-bridge running as the admin bot) should be able to create new agent containers without requiring a human to edit `docker-compose.yml` and restart the stack
- Agents should connect back to Mattermost but have no access to the Docker API or other agents' workspaces
- This mirrors how copilot-bridge already manages agents as separate sessions today, but with container-level isolation instead of process-level
Proposed Solution
Introduce a container driver concept in copilot-bridge, alongside the existing local (in-process) driver.
Config
Add a driver field to bot config:
```json
{
  "bots": {
    "lal": {
      "token": "{{ op://Vault/copilot-bridge/lal-token }}",
      "driver": "docker",
      "image": "ghcr.io/chrisromp/copilot-bridge:latest",
      "network": "mattermost-net"
    }
  }
}
```
How it works
When the admin bridge receives a request to activate a bot configured with `"driver": "docker"`:
- It calls the Docker API (via the socket proxy) to create a container from the specified image
- The container is connected to `mattermost-net` so it can reach Mattermost
- The container receives its scoped config (bot token, workspace path) via Docker secrets and volume mounts
- The container is started - it runs a scoped copilot-bridge instance for that bot only
- The admin bridge tracks the container ID in SQLite for lifecycle management
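The activation flow above can be sketched roughly as follows. This is a hypothetical illustration, not the actual copilot-bridge code: `activate_docker_bot`, `docker_client`, and the `agent_containers` table are invented names, and `docker_client` stands in for whatever Docker API client the bridge uses against the socket proxy.

```python
import sqlite3

def activate_docker_bot(bot_name: str, bot_cfg: dict, docker_client, db: sqlite3.Connection) -> str:
    """Hypothetical admin-bridge flow for a `"driver": "docker"` bot."""
    # 1. Create a container from the configured image on the agent network.
    container_id = docker_client.create_container(
        image=bot_cfg["image"],
        network=bot_cfg["network"],
        name=f"agent-{bot_name}",
    )
    # 2. Start it; the container runs a scoped bridge instance for this bot only.
    docker_client.start_container(container_id)
    # 3. Track the container in SQLite for lifecycle management.
    db.execute(
        "INSERT INTO agent_containers (bot_name, container_id, status) VALUES (?, ?, ?)",
        (bot_name, container_id, "running"),
    )
    db.commit()
    return container_id
```

Keeping the Docker client injected like this also makes the flow testable with a fake client, without a running daemon.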
Docker API access
The admin bridge connects to Docker via a `DOCKER_HOST` environment variable pointing at the socket proxy (not the raw socket). The socket proxy limits which API calls are permitted: create, start, stop, and inspect, but no delete and no exec.
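The permitted-operation set could be expressed as a simple allowlist. This is an illustrative sketch of the policy, not the proxy's actual implementation; operation names here are shorthand, not real Docker API endpoint paths.

```python
# Operations the socket proxy permits for the admin bridge (illustrative
# shorthand names; a real proxy filters HTTP methods and endpoint paths).
ALLOWED_OPS = frozenset({"create", "start", "stop", "inspect"})

def is_permitted(op: str) -> bool:
    """Return True only for operations on the allowlist; everything
    else (delete, exec, ...) is denied by default."""
    return op in ALLOWED_OPS
```

Deny-by-default matters here: anything not explicitly listed, including future Docker API additions, stays blocked.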
Agent container restrictions
Spawned agent containers:
- Connect to `mattermost-net` only (no Docker API access)
- Have their own workspace volume mounted read/write
- Receive their bot token via Docker secret
- Do not share volumes with other agents or the admin bridge
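Put together, the create-request for an agent container might look something like the sketch below. The helper and the `SecretRef` field are hypothetical; `Image`, `HostConfig.Binds`, and `HostConfig.NetworkMode` loosely follow Docker Engine API naming, but this is a shape illustration, not a verified API payload.

```python
def agent_container_spec(bot_name: str, image: str, network: str) -> dict:
    """Hypothetical create-container spec for a spawned agent."""
    workspace = f"agent-{bot_name}-workspace"  # per-agent named volume, not shared
    return {
        "Image": image,
        "HostConfig": {
            # Own workspace, read/write; no shared volumes, and crucially
            # no Docker socket bind-mounted into the agent.
            "Binds": [f"{workspace}:/workspace:rw"],
            # Attached to the Mattermost network only.
            "NetworkMode": network,
        },
        # The bot token arrives via a Docker secret; only a reference
        # name appears in the spec (field name invented for illustration).
        "SecretRef": f"{bot_name}-token",
    }
```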
Deliverables
- Design and implement `ContainerDriver` alongside the existing `LocalDriver` in session management
- `DOCKER_HOST` env var support in the bridge for pointing to a socket proxy
- Container lifecycle tracking in SQLite (container ID, status, bot name)
- Config schema update: `driver`, `image`, `network` fields on bot entries
- Admin agent tooling: new tool or slash command to spawn/stop agent containers
- Documentation: multi-agent Docker architecture guide
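One possible shape for the driver abstraction in the first deliverable: a common interface that both drivers satisfy, with dispatch on the bot's `driver` field. All names and method signatures here are hypothetical; copilot-bridge's actual session-management API may differ.

```python
from typing import Protocol

class AgentDriver(Protocol):
    """Hypothetical driver interface shared by local and docker drivers."""
    def start(self, bot_name: str, cfg: dict) -> str: ...  # returns a handle
    def stop(self, handle: str) -> None: ...

class LocalDriver:
    """Existing in-process behaviour; the handle is just the bot name."""
    def __init__(self) -> None:
        self.running: set[str] = set()
    def start(self, bot_name: str, cfg: dict) -> str:
        self.running.add(bot_name)
        return bot_name
    def stop(self, handle: str) -> None:
        self.running.discard(handle)

def driver_for(bot_cfg: dict, drivers: dict) -> AgentDriver:
    # Bots without a `driver` field keep today's in-process behaviour,
    # so single-process deployments are unaffected.
    return drivers[bot_cfg.get("driver", "local")]
```

Defaulting to `"local"` keeps the change backwards compatible with existing configs, in line with the non-goals below.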
Non-Goals
- We are not removing or deprecating the local driver - single-process deployments remain fully supported
- Kubernetes support is out of scope for this issue
Open Questions
- Should the admin bridge auto-start agent containers on bridge startup, or only on demand?
- What happens to a running agent container if its config is removed from `config.json`?
- Should agents be able to signal the admin bridge (e.g. to request a restart)?
Reported By
Agent (automated) - drafted collaboratively with user raykao