A bare-bones, containerized task automation system built with Python and Docker, inspired by Buildbot and ideas from GitHub Actions containerized self-hosted runners.
Key Features • Requirements • How To Use • Services • Related • License
- ▶️ Robust system to run Bash commands or scripts in isolated environments, delivering generated artifacts on demand
- ⚙️ Containerized automation stack built with FastAPI, TaskIQ, and Redis, inspired by Buildbot and GitHub Actions runners
- 🔥 Lightweight, self-hosted task runner for containerized, reproducible job execution
- 🔄 Background task scheduling powered by TaskIQ with Redis as the message broker
- 🔌 FastAPI-based API server for external orchestration, status reporting, and integration
- 🔒 Isolated frontend and backend Docker networks for enhanced security and service separation
- 🚀 Managed with docker-compose and streamlined with an optional Makefile workflow
- 📦 Supports both local and containerized development setups with minimal friction
- 🐳 Privileged Docker-in-Docker (DinD) container enabling nested container orchestration inside jobs
Before you begin, ensure that you have the following tools installed:
- Docker (v20.10 or higher)
- Docker Compose (v1.27 or higher)
- (Optional for devcontainers) GoLang Docker Credential Helper (v0.6.4 or higher)
Ensure that your system has access to the necessary network and storage configurations.
Follow the steps below to get the project running in your local environment.
Ensure you have a `.env` file in the root of your project directory (a default `.env` is available here). This file should contain any environment variables required by the services.
A Makefile was created to simplify the setup and execution of the project. Here's how you can use it:
- Set Up Environment: To set up the environment, install dependencies, and choose how to run the app, execute:
make build
This will:
- Check whether Miniconda and Docker are installed.
- Prompt the user to confirm the installation of Miniconda and Docker if they are not found.
- Set up a Conda environment (buildbot) with Python 3.11 if it doesn’t exist yet.
- Install dependencies using Poetry.
- Prompt the user to choose whether to run the app via Docker Compose or manually.
- If running manually, start Redis, the TaskIQ scheduler, and the Buildbot API.
- Run:
- If you want to run the app using Docker Compose, simply run:
make run_docker
- To run the app locally using FastAPI and Debugpy, run:
make run_local
- To build and choose how to run the app, run:
make run
- To stop the app, run:
make stop_local
- To stop the app using Docker Compose, run:
make stop_docker
With Docker Compose, you can build and run the services as follows:
docker-compose up --build
This will build the images defined in the Dockerfile, start all the containers, and set up the services as defined in the docker-compose.yml file. The --build flag ensures that the images are rebuilt before starting.
When you want to stop the services, simply run:
docker-compose down
This will stop and remove the containers. To remove containers and volumes, you can use:
docker-compose down -v
Note that TaskIQ must be running for the app to function properly. The scheduler service is started automatically in the background via a VS Code pre-launch task (run-scheduler). To start the scheduler manually, run:
conda run -v --live-stream -n buildbot env PYTHONPATH=buildbot taskiq scheduler app.background.broker:scheduler
The project includes four main services:
- API - Exposes the Buildbot API and manages interactions.
- Job Manager - Handles background tasks and job scheduling.
- Redis - Acts as a message broker between the components.
- Docker (DinD) - Provides Docker-in-Docker functionality for tasks requiring Docker commands.
- A routine is needed to reschedule jobs that failed to enqueue. This can be achieved by querying Redis for pending jobs and invoking the Job Manager, similar to the Job Service.
- A cleanup routine could remove stray containers and prevent resource starvation. While containers are purged after artifact collection, a simple `docker container prune` every hour would suffice for an MVP.
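The hourly cleanup above can be sketched as a background timer that repeatedly invokes a prune action. This is an illustrative sketch, not the project's code: the prune callable is injected, so in practice it could call docker-py's `containers.prune()` or shell out to `docker container prune -f`.

```python
import threading
from typing import Callable

def schedule_cleanup(prune: Callable[[], None], interval_s: float = 3600.0) -> threading.Timer:
    """Run `prune` every `interval_s` seconds on a daemon timer (MVP-grade cleanup loop)."""
    def tick() -> None:
        prune()  # e.g. docker.from_env().containers.prune(), or subprocess to `docker container prune -f`
        schedule_cleanup(prune, interval_s)  # reschedule the next run
    timer = threading.Timer(interval_s, tick)
    timer.daemon = True  # do not block process shutdown
    timer.start()
    return timer
```

Injecting the prune callable keeps the scheduling logic testable without a Docker daemon.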
- The design draws inspiration from GitHub Actions' container-based runners, utilizing docker-py and TaskIQ for job management. While Python’s `subprocess` combined with user namespaces or `chroot` was considered for job execution, `docker-py` was chosen for its superior isolation, albeit at the cost of additional complexity. This decision was driven by the need to keep the job runner isolated from both the host machine and the API server.
- Additionally, `docker-py` proved beneficial in streamlining the retrieval of job artifacts and capturing stdout/stderr from containers.
- The design uses base64-encoded commands passed to a Docker container. The application decodes and formats the command into a `run.sh` script, which is executed and then deleted to avoid residual files.
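The decode-and-execute flow described above can be sketched as follows. This is a simplified stand-in for the project's actual code: the helper name is hypothetical, and it runs the script directly via `subprocess` rather than inside a container.

```python
import base64
import subprocess
import tempfile
from pathlib import Path

def run_encoded_command(encoded: str) -> str:
    """Decode a base64 command, write it to run.sh, execute it, then delete the script."""
    command = base64.b64decode(encoded).decode("utf-8")
    workdir = Path(tempfile.mkdtemp())
    script = workdir / "run.sh"
    script.write_text(f"#!/bin/sh\n{command}\n")
    script.chmod(0o755)
    try:
        result = subprocess.run(["sh", str(script)], capture_output=True, text=True)
        return result.stdout
    finally:
        script.unlink()  # delete run.sh to avoid residual files, as the design describes

encoded = base64.b64encode(b"echo hello").decode()
print(run_encoded_command(encoded).strip())  # hello
```

Base64 encoding avoids quoting and escaping pitfalls when shipping an arbitrary shell command into the container.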
- Redis was selected as the data store due to its simplicity, the limited number of entities involved, and the straightforward relationship between jobs and tasks.
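Given that simple job-to-task relationship, the storage layer reduces to JSON values under per-job keys. The sketch below uses a plain dict as a stand-in for Redis, and the `job:{id}` key naming and field names are assumptions for illustration, not the project's actual schema.

```python
import json
from uuid import uuid4

# Stand-in for a Redis instance (SET/GET of JSON strings under string keys).
store: dict[str, str] = {}

def save_job(command: str, status: str = "pending") -> str:
    """Persist a new job record and return its generated id."""
    job_id = uuid4().hex
    store[f"job:{job_id}"] = json.dumps({"command": command, "status": status})
    return job_id

def get_job(job_id: str) -> dict:
    """Load and decode a job record by id."""
    return json.loads(store[f"job:{job_id}"])

job_id = save_job("echo hello")
print(get_job(job_id)["status"])  # pending
```

With so few entity types, a key-value store needs no relational schema, which is the simplicity argument above.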
- TaskIQ was adopted for scheduling and managing background jobs, ensuring API responsiveness while keeping TaskIQ workers isolated from FastAPI’s main thread.
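TaskIQ (with Redis as the broker) provides this pattern out of the box; the stdlib sketch below only illustrates the underlying idea of keeping job execution off the main (API) thread via a broker-like queue. None of it is TaskIQ's actual API.

```python
import queue
import threading

jobs: "queue.Queue[str]" = queue.Queue()  # stand-in for the Redis-backed broker
results: dict[str, str] = {}

def worker() -> None:
    """Consume jobs off the main thread, as TaskIQ workers do for the API process."""
    while True:
        job_id = jobs.get()
        results[job_id] = "done"  # placeholder for real job execution
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()
jobs.put("job-1")  # the API enqueues and returns immediately
jobs.join()        # wait only for demonstration; the API would not block
print(results["job-1"])  # done
```

The API stays responsive because enqueueing is cheap; the slow work happens in the worker, which in the real stack is a separate TaskIQ process behind Redis.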
- The design emphasizes adherence to SOLID principles and clean code, particularly through the use of interfaces and constructor dependency injection. This approach ensures cohesion, testability, and scalability while maintaining alignment with the open-closed principle. However, it introduces additional complexity due to the reliance on abstract classes and constructor dependency injection.
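A minimal sketch of that interface-plus-constructor-injection style, using hypothetical names (`JobStore`, `JobService`) rather than the project's actual classes:

```python
from abc import ABC, abstractmethod

class JobStore(ABC):
    """Interface a storage backend must satisfy (open-closed principle)."""
    @abstractmethod
    def get_status(self, job_id: str) -> str: ...

class InMemoryJobStore(JobStore):
    """Trivial implementation, handy as a test double for a Redis-backed store."""
    def __init__(self) -> None:
        self._statuses: dict[str, str] = {}
    def set_status(self, job_id: str, status: str) -> None:
        self._statuses[job_id] = status
    def get_status(self, job_id: str) -> str:
        return self._statuses.get(job_id, "unknown")

class JobService:
    def __init__(self, store: JobStore) -> None:  # constructor dependency injection
        self._store = store
    def status(self, job_id: str) -> str:
        return self._store.get_status(job_id)

store = InMemoryJobStore()
store.set_status("42", "running")
print(JobService(store).status("42"))  # running
```

Because `JobService` depends only on the abstract `JobStore`, swapping the backend or substituting a fake in tests requires no change to the service itself.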
- A microservice architecture was implemented to separate the API, job manager, and storage, enhancing scalability and maintainability. For simplicity, the job manager uses the same image as the API Server.
- Although Docker-in-Docker (DinD) offers a viable development environment, its use in production environments, particularly in CI systems, has been discouraged. Jérôme Petazzoni, the tool’s author, highlighted these concerns in an article, suggesting that alternative approaches should be considered for production deployments.
- Decorators can reduce verbosity in methods by separating logging from business logic.
- More granular exception handling will improve troubleshooting. While critical methods are covered, exceptions from third-party packages often have vague messages and are caught far from their source. Using specific exceptions and defining clear error messages is recommended.
- `docker-py` supports real-time stdout/stderr capture with `container.logs(stream=True)`, which could enable a WebSocket endpoint indexed by job ID for a CI/CD-like experience. Logs are collected when artifacts are retrieved, but no endpoints currently expose them to users.
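The streaming idea above amounts to iterating over log chunks as they arrive and forwarding each one. The sketch below substitutes a fake generator for `container.logs(stream=True)` (which yields raw byte chunks) so it runs without a Docker daemon; a WebSocket handler would send each decoded chunk instead of collecting them.

```python
from typing import Iterable, Iterator

def fake_log_stream() -> Iterator[bytes]:
    """Stand-in for docker-py's container.logs(stream=True), which yields byte chunks."""
    yield from (b"step 1\n", b"step 2\n")

def forward_logs(chunks: Iterable[bytes]) -> list[str]:
    """Decode each chunk as it arrives; a WebSocket handler would push these to the client."""
    return [chunk.decode().rstrip("\n") for chunk in chunks]

print(forward_logs(fake_log_stream()))  # ['step 1', 'step 2']
```

Keying the stream by job ID would let a client attach to a running job and watch output live, mirroring the CI/CD experience described above.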
- s3rius/Fast-API-Template - FastAPI project template
- GitHub Actions - Self-hosted runners and workflow automation
- Buildbot - The CI automation framework
GitHub @brunohaf
