Alloy - Kubernetes-Native Continuous Deployment Orchestration

Safe production releases through progressive delivery and automated rollback

Key Features • Architecture • Getting Started • CI Integration • Deployment Strategies


Overview

Releasing software frequently to production remains one of the hardest problems in modern infrastructure.

Traditional deployment pipelines often fail at the most critical moment — production rollout.

Teams commonly face issues such as:

  • breaking changes discovered too late
  • immediate full-traffic exposure to untested code
  • manual rollback under pressure
  • limited visibility into release health
  • blind kubectl apply workflows with no automated recovery

Alloy is a Kubernetes-native continuous deployment orchestration system built to address these challenges.

It sits between CI systems and Kubernetes, acting as an intelligent release controller that understands:

  • what is being deployed
  • how it should be released
  • when it must be rolled back

By introducing progressive traffic control, continuous validation, and automatic rollback, Alloy enables safer production releases without slowing down delivery velocity.

Core Philosophy

CI builds artifacts. Alloy decides how they reach production.

The orchestrator ensures every deployment is verified progressively and can automatically fall back to a stable version when something goes wrong.


🚀 Key Features

  • 🎛️ Kubernetes-Native Design — Built with the official client-go SDK for real-time cluster communication
  • 🛡️ Automatic Rollback on Failure — Detects issues and reverts to the last known stable version instantly
  • 📊 Progressive Traffic Splitting — Gradually shifts traffic through canary stages (17% → 50% → 83% → 100%)
  • 🎯 Canary Deployments — Minimize blast radius with stage-based rollouts
  • 🔄 Rolling Updates — Zero-downtime releases with built-in rollback support
  • 💾 Stable Version Promotion — Persistent release memory ensures a known good state at all times
  • 🔗 Webhook-Driven — Native integration with any CI tool (GitHub Actions, GitLab CI, Jenkins, CircleCI)
  • 🌐 API-First Control — RESTful API for centralized release orchestration

πŸ— Architecture

Alloy follows a controller-style architecture similar to native Kubernetes components.

High-Level Flow

┌─────────────┐
│   CI Tool   │
│ (Build/Test)│
└──────┬──────┘
       │ Webhook
       ▼
┌─────────────────────────┐
│   Alloy Orchestrator    │
│  ┌─────────────────┐    │
│  │ Deployment      │    │     ┌──────────────┐
│  │ Engine          │◄───┼────►│ PostgreSQL   │
│  │ - Strategy      │    │     │ Database     │
│  │ - Health checks │    │     │ - Metadata   │
│  │ - Traffic rules │    │     │ - History    │
│  └────────┬────────┘    │     └──────────────┘
└───────────┼─────────────┘
            │ client-go SDK
            ▼
┌─────────────────────────┐
│ Kubernetes API Server   │
└────────┬────────────────┘
         │
    ┌────┴─────┬──────────┐
    ▼          ▼          ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Canary │ │ Stable │ │Ingress/│
│  Pods  │ │  Pods  │ │  LB    │
└────────┘ └────────┘ └────────┘

How It Works

  1. Trigger — CI system completes build/tests and sends webhook to Alloy
  2. Validate — Alloy verifies project and release metadata from database
  3. Orchestrate — Communicates with Kubernetes using client-go SDK (not shell-based)
  4. Monitor — Continuously observes live cluster state (pod readiness, availability)
  5. React — Makes deployment decisions in real time without polling delays
  6. Decide — Promotes to stable or executes automatic rollback based on health data
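The six steps above can be strung together as a single handler pipeline. The sketch below is illustrative only: `Release`, `handleDeploy`, and the health flag are assumptions standing in for Alloy's actual validation, client-go calls, and observation loop.

```go
package main

import (
	"errors"
	"fmt"
)

// Release carries the metadata the webhook delivers (hypothetical shape).
type Release struct {
	ProjectID string
	ImageTag  string
	Healthy   bool // stands in for observed health data from the cluster
}

// handleDeploy sketches steps 2-6: validate, then (stubbed) orchestrate and
// monitor, then decide between promotion and automatic rollback.
func handleDeploy(r Release) (string, error) {
	if r.ProjectID == "" {
		return "", errors.New("unknown project") // step 2: validate
	}
	// steps 3-5 would apply manifests via client-go and watch live state;
	// here the observed outcome is collapsed into r.Healthy.
	if !r.Healthy {
		return "rolled-back", nil // step 6: automatic rollback
	}
	return "promoted", nil // step 6: promote to stable
}

func main() {
	out, _ := handleDeploy(Release{ProjectID: "p1", ImageTag: "v1.2.3", Healthy: true})
	fmt.Println(out) // promoted
}
```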

Key Architectural Benefits

The orchestrator does not replace Kubernetes β€” it controls Kubernetes resources intelligently.

All communication happens via the official Kubernetes API layer, allowing Alloy to:

  • Read live pod status in real-time
  • Monitor readiness and availability metrics
  • Detect rollout failures immediately
  • React without polling delays
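The "react without polling" pattern can be illustrated with a plain Go channel standing in for a client-go watch stream. `PodEvent` and `watchAndReact` are assumptions for illustration, not Alloy's or client-go's actual types:

```go
package main

import "fmt"

// PodEvent is a stand-in for the pod status updates an informer would
// deliver; Ready mirrors the pod's readiness condition.
type PodEvent struct {
	Pod   string
	Ready bool
}

// watchAndReact consumes events as they arrive instead of polling on a
// timer: each update is evaluated the moment the channel delivers it.
func watchAndReact(events <-chan PodEvent) (failures int) {
	for ev := range events {
		if !ev.Ready {
			failures++
			fmt.Println("detected failure:", ev.Pod)
		}
	}
	return failures
}

func main() {
	events := make(chan PodEvent, 3)
	events <- PodEvent{Pod: "canary-1", Ready: true}
	events <- PodEvent{Pod: "canary-2", Ready: false}
	events <- PodEvent{Pod: "canary-3", Ready: true}
	close(events)
	fmt.Println("failures:", watchAndReact(events)) // failures: 1
}
```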

🚦 Deployment Strategies

1. Rolling Deployment

Standard zero-downtime deployment that gradually replaces old pods with new ones while maintaining service availability.

Use case: Routine updates with minimal risk

Behavior:

  • Supports rollback to previous ReplicaSet
  • Maintains continuous availability
  • Kubernetes-native strategy

2. Progressive Canary Deployments

Deploys new version alongside stable and routes a small percentage of traffic, gradually increasing exposure based on health metrics.

Use case: High-risk or critical production releases

Traffic Progression:

17% → 50% → 83% → 100%

If metrics degrade at any step, rollback is triggered automatically.

Canary Traffic Model

Alloy uses stage-based canary progression with automated health verification at each stage:

| Stage     | Replicas     | Traffic % | Observation Window |
|-----------|--------------|-----------|--------------------|
| Stage 1   | 1 Pod        | 17%       | 3 minutes          |
| Stage 2   | 3 Pods       | 50%       | 3 minutes          |
| Stage 3   | 5 Pods       | 83%       | 2 minutes          |
| Promotion | Full Cluster | 100%      | Permanent          |

Canary Execution Flow

// Stage definition matching the progression table above.
type CanaryStage struct{ Replicas, TrafficPct, DurationMin int }

var canaryStages = []CanaryStage{
    {Replicas: 1, TrafficPct: 17, DurationMin: 3},
    {Replicas: 3, TrafficPct: 50, DurationMin: 3},
    {Replicas: 5, TrafficPct: 83, DurationMin: 2},
}

Process:

  1. Deploy canary pods
  2. Route partial traffic
  3. Observe health metrics
  4. Increase exposure per stage
  5. Promote to stable on success
  6. Roll back immediately on failure

This staged approach minimizes blast radius while allowing fast promotion.
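The staged process can be sketched as a control loop walking the stages in order. This is a hedged illustration: `runCanary` and the injected `healthy` callback stand in for Alloy's real health verification and traffic plumbing.

```go
package main

import "fmt"

type CanaryStage struct {
	Replicas    int // canary pod count at this stage
	TrafficPct  int // share of traffic routed to the canary
	DurationMin int // observation window in minutes
}

var canaryStages = []CanaryStage{
	{Replicas: 1, TrafficPct: 17, DurationMin: 3},
	{Replicas: 3, TrafficPct: 50, DurationMin: 3},
	{Replicas: 5, TrafficPct: 83, DurationMin: 2},
}

// runCanary walks the stages in order; any failed health observation aborts
// the walk so the caller can roll back to the stable version immediately.
func runCanary(stages []CanaryStage, healthy func(CanaryStage) bool) (promoted bool) {
	for _, s := range stages {
		fmt.Printf("stage: %d replicas, %d%% traffic, observe %d min\n",
			s.Replicas, s.TrafficPct, s.DurationMin)
		if !healthy(s) {
			return false // rollback: canary never reaches full traffic
		}
	}
	return true // all stages passed: promote canary to stable
}

func main() {
	ok := runCanary(canaryStages, func(CanaryStage) bool { return true })
	fmt.Println("promoted:", ok) // promoted: true
}
```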


3. Stable Promotion

Once a new version passes all health checks:

  • It becomes the new stable version
  • Old stable is archived for rollback capability
  • Guarantees a known good state at all times

🔗 CI Integration (Webhook Based)

Alloy integrates with any CI tool through a simple HTTP/HTTPS webhook. Once CI completes successfully, the pipeline notifies Alloy to begin deployment orchestration.

Webhook Endpoint

POST http://localhost:{PORT}/api/webhook/deploy

Example: GitHub Actions

name: Deploy to Production

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger Alloy Deployment
        run: |
curl -X POST "http://{your_host}:{port}/api/webhook/deploy" \
            -H "Content-Type: application/json" \
            -d '{
              "user_id": "${{ secrets.ALLOY_USER_ID }}",
              "project_id": "${{ secrets.ALLOY_PROJECT_ID }}",
              "image_tag": "${{ github.sha }}",
              "commit_sha": "${{ github.sha }}",
              "strategy": "auto",
              "files": {
                "secret": "'"$(base64 -w 0 k8s/secret.yaml)"'",
                "service": "'"$(base64 -w 0 k8s/service.yaml)"'",
                "deployment": "'"$(base64 -w 0 k8s/deployment.yaml)"'"
              }
            }'

Request Payload

curl -X POST "http://localhost:8080/api/webhook/deploy" \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "USER_ID",
    "project_id": "PROJECT_ID",
    "image_tag": "v1.2.3",
    "commit_sha": "abc123def",
    "strategy": "auto",
    "files": {
      "secret": "<secret-yaml>",
      "service": "<service-yaml>",
      "deployment": "<deployment-yaml>"
    }
  }'

Request Fields

| Field      | Description                                        | Required                 |
|------------|----------------------------------------------------|--------------------------|
| user_id    | User identifier stored in Alloy DB                 | ✅ Yes                   |
| project_id | Project identifier stored in Alloy DB              | ✅ Yes                   |
| image_tag  | Container image tag built by CI                    | ✅ Yes                   |
| commit_sha | Git commit reference                               | ✅ Yes                   |
| strategy   | Deployment strategy (auto, rollout, canary)        | Optional (default: auto) |
| files      | Kubernetes manifests (Secret, Service, Deployment) | ✅ Yes                   |

Note: The user_id and project_id must already exist in Alloy's database, created during Docker Compose setup.


πŸŽ›οΈ Deployment Strategy Options

The strategy field is optional; leaving it set to auto is strongly recommended.

| Strategy | Behavior                                                 | Use Case                                    |
|----------|----------------------------------------------------------|---------------------------------------------|
| auto ⭐  | First deployment → rollout; subsequent releases → canary | Recommended for all production workloads    |
| rollout  | Immediate full traffic shift; old version replaced       | Bootstrap deployments, internal services    |
| canary   | Force canary deployment; progressive traffic increase    | High-risk releases requiring manual control |

Why auto is Recommended

The auto strategy provides the best balance:

  • Fast bootstrapping for initial deployment
  • Safety enforcement for subsequent releases
  • Automatic risk assessment based on deployment history
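The auto behavior described above reduces to a small resolution rule: no prior deployments means rollout, anything else means canary. A minimal sketch, assuming hypothetical names (`resolveStrategy` and its parameters are not Alloy's actual API):

```go
package main

import "fmt"

// resolveStrategy picks the effective strategy for a release. An explicit
// "rollout" or "canary" request always wins; "auto" falls back to rollout
// for a project's first deployment (nothing stable exists to split traffic
// against) and canary for every release after that.
func resolveStrategy(requested string, priorDeployments int) string {
	if requested != "" && requested != "auto" {
		return requested // caller forced a specific strategy
	}
	if priorDeployments == 0 {
		return "rollout" // bootstrap: fast initial deployment
	}
	return "canary" // safety enforcement for subsequent releases
}

func main() {
	fmt.Println(resolveStrategy("auto", 0)) // rollout
	fmt.Println(resolveStrategy("auto", 3)) // canary
}
```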

🛠 Getting Started

Prerequisites

  • Kubernetes cluster (v1.24+)
  • Docker and Docker Compose
  • kubectl configured with cluster access
  • PostgreSQL (included in Docker Compose)

Quick Setup

1. Clone the repository

git clone https://github.com/alloy-go/Alloy-core.git
cd Alloy-core

2. Configure environment

Create a .env file:

# Alloy Orchestrator Settings
APP_PORT=8080
APP_ENV=production

# Database Configuration
DB_HOST=postgres
DB_PORT=5432
DB_USER=alloy_admin
DB_PASSWORD=your_secure_password_here
DB_NAME=alloy_orchestrator

3. Deploy with Docker Compose

docker-compose up -d

4. Verify installation

# Check API health
curl http://localhost:8080/health

# Check database connection
docker-compose logs alloy-api

Docker Compose Configuration

version: '3.8'

services:
  alloy-api:
    image: alloy/orchestrator:latest
    ports:
      - "${APP_PORT}:8080"
    environment:
      - DATABASE_URL=postgres://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}?sslmode=disable
    depends_on:
      - postgres
    restart: unless-stopped

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - alloy_storage:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  alloy_storage:

Why Alloy?

Deploying is mechanical — releasing is risky.

Alloy manages that risk through progressive delivery and automatic rollback.

Alloy vs other tools

Alloy vs. Direct kubectl apply

Problem with kubectl:

  • Blind updates with no health awareness
  • No automatic rollback capability
  • No traffic control
  • No deployment memory
  • Failure handling is completely manual

Alloy introduces decision-making on top of Kubernetes.


Alloy vs. Pure CI Pipelines

CI systems are designed to:

  • ✅ Build artifacts
  • ✅ Run tests
  • ✅ Push images

They are NOT designed to manage live production state.

CI tools lack:

  • ❌ Real-time Kubernetes awareness
  • ❌ Continuous observation loops
  • ❌ Progressive traffic control
  • ❌ Automated decision-making

Alloy separates concerns clearly:

  • CI → Builds
  • Alloy → Releases

Alloy is designed for CI-driven environments, where CI systems trigger deployments while Alloy controls the entire release process.

Alloy is Ideal For Teams That Need:

  • 🎯 CI-driven deployment environments
  • 🔌 API-first control
  • 🧠 Centralized release logic
  • 🛡️ Automatic recovery mechanisms
  • 📊 Persistent deployment history

📊 Deployment Flow

graph TD
    A[CI Builds Container Image] --> B[CI Triggers Alloy Webhook]
    B --> C[Alloy Validates User & Project]
    C --> D[Deployment Strategy Resolved]
    D --> E[Kubernetes Manifests Applied]
    E --> F[Alloy Watches Live Cluster Metrics]
    F --> G[Traffic Adjusted Progressively]
    G --> H{Health Check}
    H -->|Success| I[Release Promoted to Stable]
    H -->|Failure| J[Automatic Rollback to Previous Version]

License

This project is licensed under the MIT License – see the LICENSE file for details.
