Safe production releases through progressive delivery and automated rollback
Key Features • Architecture • Getting Started • CI Integration • Deployment Strategies
Releasing software frequently to production remains one of the hardest problems in modern infrastructure.
Traditional deployment pipelines often fail at the most critical moment: production rollout.
Teams commonly face:

- Breaking changes discovered too late
- Immediate full-traffic exposure to untested code
- Manual rollback under pressure
- Limited visibility into release health
- Blind `kubectl apply` workflows with no automated recovery
Alloy is a Kubernetes-native continuous deployment orchestration system built to address these challenges.
It sits between CI systems and Kubernetes, acting as an intelligent release controller that understands:
- what is being deployed
- how it should be released
- when it must be rolled back
By introducing progressive traffic control, continuous validation, and automatic rollback, Alloy enables safer production releases without slowing down delivery velocity.
CI builds artifacts. Alloy decides how they reach production.
The orchestrator ensures every deployment is verified progressively and can automatically fall back to a stable version when something goes wrong.
## Key Features

- **Kubernetes-Native Design** – built with the official `client-go` SDK for real-time cluster communication
- **Automatic Rollback on Failure** – detects issues and reverts to the last known stable version instantly
- **Progressive Traffic Splitting** – gradually shifts traffic through canary stages (17% → 50% → 83% → 100%)
- **Canary Deployments** – minimize blast radius with stage-based rollouts
- **Rolling Updates** – zero-downtime releases with built-in rollback support
- **Stable Version Promotion** – persistent release memory ensures a known good state at all times
- **Webhook-Driven** – native integration with any CI tool (GitHub Actions, GitLab CI, Jenkins, CircleCI)
- **API-First Control** – RESTful API for centralized release orchestration
## Architecture

Alloy follows a controller-style architecture similar to native Kubernetes components.
```
┌─────────────┐
│   CI Tool   │
│ (Build/Test)│
└──────┬──────┘
       │ Webhook
       ▼
┌─────────────────────────┐
│   Alloy Orchestrator    │
│  ┌───────────────────┐  │
│  │ Deployment        │  │      ┌──────────────┐
│  │ Engine            │──┼─────►│  PostgreSQL  │
│  │  - Strategy       │  │      │  Database    │
│  │  - Health checks  │  │      │   - Metadata │
│  │  - Traffic rules  │  │      │   - History  │
│  └─────────┬─────────┘  │      └──────────────┘
└────────────┼────────────┘
             │ client-go SDK
             ▼
┌─────────────────────────┐
│  Kubernetes API Server  │
└────────────┬────────────┘
             │
     ┌───────┼───────┐
     ▼       ▼       ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Canary │ │ Stable │ │Ingress/│
│  Pods  │ │  Pods  │ │   LB   │
└────────┘ └────────┘ └────────┘
```
- **Trigger** – CI system completes build/tests and sends a webhook to Alloy
- **Validate** – Alloy verifies project and release metadata from the database
- **Orchestrate** – communicates with Kubernetes using the `client-go` SDK (not shell-based)
- **Monitor** – continuously observes live cluster state (pod readiness, availability)
- **React** – makes deployment decisions in real time without polling delays
- **Decide** – promotes to stable or executes automatic rollback based on health data
The orchestrator does not replace Kubernetes; it controls Kubernetes resources intelligently.
All communication happens via the official Kubernetes API layer, allowing Alloy to:
- Read live pod status in real-time
- Monitor readiness and availability metrics
- Detect rollout failures immediately
- React without polling delays
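The polling-free reaction model can be sketched with a stdlib-only Go snippet. Here the `PodEvent` type and channel stand in for the client-go watch stream; all names are illustrative, not Alloy's actual API:

```go
package main

import "fmt"

// PodEvent is a hypothetical, simplified stand-in for a client-go watch event.
type PodEvent struct {
	Pod   string
	Ready bool
}

// reactToEvents blocks on a channel of events instead of polling on a timer:
// every state change is handled the moment it arrives.
func reactToEvents(events <-chan PodEvent) (failures int) {
	for ev := range events {
		if !ev.Ready {
			failures++
			fmt.Printf("pod %s not ready -> flag for rollback check\n", ev.Pod)
		}
	}
	return failures
}

func main() {
	events := make(chan PodEvent, 3)
	events <- PodEvent{Pod: "canary-1", Ready: true}
	events <- PodEvent{Pod: "canary-2", Ready: false}
	events <- PodEvent{Pod: "canary-3", Ready: true}
	close(events)
	fmt.Println("failures:", reactToEvents(events))
}
```

In the real controller, the channel would be fed by a client-go informer rather than a buffered queue.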
## Deployment Strategies

### Rolling Update

Standard zero-downtime deployment that gradually replaces old pods with new ones while maintaining service availability.
Use case: Routine updates with minimal risk
Behavior:
- Supports rollback to previous ReplicaSet
- Maintains continuous availability
- Kubernetes-native strategy
### Canary Deployment

Deploys the new version alongside the stable one and routes a small percentage of traffic to it, gradually increasing exposure based on health metrics.
Use case: High-risk or critical production releases
Traffic Progression:
17% → 50% → 83% → 100%
If metrics degrade at any step, rollback is triggered automatically.
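A minimal sketch of such a health gate follows; the thresholds and function signature are assumptions for illustration, not Alloy's actual criteria:

```go
package main

import "fmt"

// healthy reports whether a canary stage passes its observation window.
// A stage passes only if every desired pod is ready and the observed
// error rate stays under 5%. Thresholds here are illustrative.
func healthy(readyPods, desiredPods int, errorRate float64) bool {
	if desiredPods == 0 {
		return false // nothing running cannot be healthy
	}
	readyRatio := float64(readyPods) / float64(desiredPods)
	return readyRatio >= 1.0 && errorRate < 0.05
}

func main() {
	fmt.Println(healthy(3, 3, 0.01)) // all pods ready, low error rate
	fmt.Println(healthy(2, 3, 0.01)) // a pod failed readiness -> roll back
}
```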
Alloy uses stage-based canary progression with automated health verification at each stage:
| Stage | Replicas | Traffic % | Observation Window |
|---|---|---|---|
| Stage 1 | 1 Pod | 17% | 3 minutes |
| Stage 2 | 3 Pods | 50% | 3 minutes |
| Stage 3 | 5 Pods | 83% | 2 minutes |
| Promotion | Full Cluster | 100% | Permanent |
```go
var canaryStages = []CanaryStage{
    {Replicas: 1, TrafficPct: 17, DurationMin: 3},
    {Replicas: 3, TrafficPct: 50, DurationMin: 3},
    {Replicas: 5, TrafficPct: 83, DurationMin: 2},
}
```

Process:
- Deploy canary pods
- Route partial traffic
- Observe health metrics
- Increase exposure per stage
- Promote to stable on success
- Roll back immediately on failure
This staged approach minimizes blast radius while allowing fast promotion.
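Putting the stage table and the process above together, the progression loop can be sketched as a self-contained snippet. The `run` helper and `probe` callback are hypothetical; the stage values repeat the configuration shown earlier:

```go
package main

import "fmt"

type CanaryStage struct {
	Replicas    int
	TrafficPct  int
	DurationMin int
}

var canaryStages = []CanaryStage{
	{Replicas: 1, TrafficPct: 17, DurationMin: 3},
	{Replicas: 3, TrafficPct: 50, DurationMin: 3},
	{Replicas: 5, TrafficPct: 83, DurationMin: 2},
}

// run walks the stages in order, consulting a health probe after each
// observation window; any failure aborts the rollout and signals rollback.
func run(stages []CanaryStage, probe func(CanaryStage) bool) string {
	for _, s := range stages {
		fmt.Printf("stage: %d pods, %d%% traffic, observe %dm\n",
			s.Replicas, s.TrafficPct, s.DurationMin)
		if !probe(s) {
			return "rollback"
		}
	}
	return "promote" // 100% traffic, new stable version
}

func main() {
	fmt.Println(run(canaryStages, func(CanaryStage) bool { return true }))
}
```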
Once a new version passes all health checks:
- It becomes the new stable version
- Old stable is archived for rollback capability
- Guarantees a known good state at all times
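This promotion and rollback bookkeeping can be illustrated with a small in-memory sketch. Alloy persists this state in PostgreSQL; the `ReleaseState` type here is an assumption for illustration:

```go
package main

import "fmt"

// ReleaseState keeps the "release memory": the current stable version
// plus a history of previous stables available for rollback.
type ReleaseState struct {
	Stable  string
	History []string
}

// Promote makes the verified candidate the new stable and archives the old one.
func (r *ReleaseState) Promote(candidate string) {
	if r.Stable != "" {
		r.History = append(r.History, r.Stable)
	}
	r.Stable = candidate
}

// Rollback restores the most recently archived stable version, if any.
func (r *ReleaseState) Rollback() {
	if n := len(r.History); n > 0 {
		r.Stable = r.History[n-1]
		r.History = r.History[:n-1]
	}
}

func main() {
	s := &ReleaseState{}
	s.Promote("v1.0.0")
	s.Promote("v1.1.0")
	s.Rollback()
	fmt.Println(s.Stable) // back to v1.0.0
}
```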
## CI Integration

Alloy integrates with any CI tool through a simple HTTP/HTTPS webhook. Once CI completes successfully, the pipeline notifies Alloy to begin deployment orchestration.
```
POST http://localhost:{PORT}/api/webhook/deploy
```
GitHub Actions example:

```yaml
name: Deploy to Production

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Check out the repo so the k8s/ manifests are available to the step below
      - uses: actions/checkout@v4

      - name: Trigger Alloy Deployment
        run: |
          curl -X POST "http://{your_host}:{port}/api/webhook/deploy" \
            -H "Content-Type: application/json" \
            -d '{
              "user_id": "${{ secrets.ALLOY_USER_ID }}",
              "project_id": "${{ secrets.ALLOY_PROJECT_ID }}",
              "image_tag": "${{ github.sha }}",
              "commit_sha": "${{ github.sha }}",
              "strategy": "auto",
              "files": {
                "secret": "'"$(base64 -w 0 k8s/secret.yaml)"'",
                "service": "'"$(base64 -w 0 k8s/service.yaml)"'",
                "deployment": "'"$(base64 -w 0 k8s/deployment.yaml)"'"
              }
            }'
```

Manual trigger example:

```bash
curl -X POST "http://localhost:8080/api/webhook/deploy" \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "USER_ID",
    "project_id": "PROJECT_ID",
    "image_tag": "v1.2.3",
    "commit_sha": "abc123def",
    "strategy": "auto",
    "files": {
      "secret": "<secret-yaml>",
      "service": "<service-yaml>",
      "deployment": "<deployment-yaml>"
    }
  }'
```

| Field | Description | Required |
|---|---|---|
| `user_id` | User identifier stored in Alloy DB | ✅ Yes |
| `project_id` | Project identifier stored in Alloy DB | ✅ Yes |
| `image_tag` | Container image tag built by CI | ✅ Yes |
| `commit_sha` | Git commit reference | ✅ Yes |
| `strategy` | Deployment strategy (`auto`, `rollout`, `canary`) | Optional (default: `auto`) |
| `files` | Kubernetes manifests (Secret, Service, Deployment), base64-encoded | ✅ Yes |
> **Note:** The `user_id` and `project_id` must already exist in Alloy's database, created during Docker Compose setup.

The `strategy` field is optional; leaving it as `auto` is strongly recommended.
| Strategy | Behavior | Use Case |
|---|---|---|
| `auto` ✅ | First deployment → rollout; subsequent releases → canary | Recommended for all production workloads |
| `rollout` | Immediate full traffic shift; old version replaced | Bootstrap deployments, internal services |
| `canary` | Forced canary deployment; progressive traffic increase | High-risk releases requiring manual control |
The `auto` strategy provides the best balance:
- Fast bootstrapping for initial deployment
- Safety enforcement for subsequent releases
- Automatic risk assessment based on deployment history
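The documented `auto` behavior amounts to a simple resolution rule, sketched below; the function name and signature are assumptions:

```go
package main

import "fmt"

// resolveStrategy sketches the documented "auto" behavior: the first release
// of a project rolls out directly, later releases go through canary stages.
func resolveStrategy(requested string, hasStableVersion bool) string {
	if requested != "" && requested != "auto" {
		return requested // explicit rollout/canary override
	}
	if !hasStableVersion {
		return "rollout" // bootstrap: no stable version to shift traffic from
	}
	return "canary"
}

func main() {
	fmt.Println(resolveStrategy("auto", false)) // first deployment
	fmt.Println(resolveStrategy("auto", true))  // subsequent release
	fmt.Println(resolveStrategy("canary", true))
}
```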
## Getting Started

### Prerequisites

- Kubernetes cluster (v1.24+)
- Docker and Docker Compose
- kubectl configured with cluster access
- PostgreSQL (included in Docker Compose)
1. **Clone the repository**

```bash
git clone https://github.com/alloy-go/Alloy-core.git
cd Alloy-core
```

2. **Configure environment**

Create a `.env` file:

```env
# Alloy Orchestrator Settings
APP_PORT=8080
APP_ENV=production

# Database Configuration
DB_HOST=postgres
DB_PORT=5432
DB_USER=alloy_admin
DB_PASSWORD=your_secure_password_here
DB_NAME=alloy_orchestrator
```

3. **Deploy with Docker Compose**

```bash
docker-compose up -d
```

4. **Verify installation**

```bash
# Check API health
curl http://localhost:8080/health

# Check database connection
docker-compose logs alloy-api
```

`docker-compose.yml` reference:

```yaml
version: '3.8'

services:
  alloy-api:
    image: alloy/orchestrator:latest
    ports:
      - "${APP_PORT}:8080"
    environment:
      - DATABASE_URL=postgres://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}?sslmode=disable
    depends_on:
      - postgres
    restart: unless-stopped

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - alloy_storage:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  alloy_storage:
```

Deploying is mechanical; releasing is risky.
Alloy manages that risk through progressive delivery and automatic rollback.
Problems with plain `kubectl apply`:
- Blind updates with no health awareness
- No automatic rollback capability
- No traffic control
- No deployment memory
- Failure handling is completely manual
Alloy introduces decision-making on top of Kubernetes.
CI systems are designed to:
- ✅ Build artifacts
- ✅ Run tests
- ✅ Push images
They are NOT designed to manage live production state.
CI tools lack:
- ❌ Real-time Kubernetes awareness
- ❌ Continuous observation loops
- ❌ Progressive traffic control
- ❌ Automated decision-making
Alloy separates concerns clearly:
- CI → Builds
- Alloy → Releases
Alloy is designed for CI-driven environments, where CI systems trigger deployments while Alloy controls the entire release process.
- CI-driven deployment environments
- API-first control
- Centralized release logic
- Automatic recovery mechanisms
- Persistent deployment history
```mermaid
graph TD
    A[CI Builds Container Image] --> B[CI Triggers Alloy Webhook]
    B --> C[Alloy Validates User & Project]
    C --> D[Deployment Strategy Resolved]
    D --> E[Kubernetes Manifests Applied]
    E --> F[Alloy Watches Live Cluster Metrics]
    F --> G[Traffic Adjusted Progressively]
    G --> H{Health Check}
    H -->|Success| I[Release Promoted to Stable]
    H -->|Failure| J[Automatic Rollback to Previous Version]
```
This project is licensed under the MIT License; see the LICENSE file for details.