blackboxprogramming/ai-chain


AI Chain

Badges: CI · Python 3.10+ · FastAPI · Ollama · Edge AI

Chained AI inference across distributed edge nodes. Routes prompts through a priority-ordered mesh of Ollama instances on Raspberry Pi 5 hardware.

Architecture

Client → /chain → Probe all nodes → Route to fastest alive node
                                   ↓ (mode=full)
                              Refine via second node

Nodes: Octavia (deepseek-r1), Aria (qwen2.5-coder), Lucidia (tinyllama), Alice (tinyllama fallback)
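
The probe-and-route step above can be sketched as follows. The node names come from the list above, but the URLs, the `probe` helper, and the latency-based selection are assumptions for illustration, not the repo's actual code.

```python
import time
import urllib.request
from typing import Optional

# Priority-ordered mesh from the node list above; hostnames/ports are hypothetical.
NODES = {
    "octavia": "http://octavia:8100",   # deepseek-r1
    "aria": "http://aria:8100",         # qwen2.5-coder
    "lucidia": "http://lucidia:8100",   # tinyllama
    "alice": "http://alice:8100",       # tinyllama fallback
}

def probe(url: str, timeout: float = 2.0) -> Optional[float]:
    """Return the round-trip time to a node's /health endpoint, or None if it's down."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(f"{url}/health", timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def fastest_alive(latencies: dict[str, Optional[float]]) -> Optional[str]:
    """Pick the node with the lowest probe latency; None means every node is down."""
    alive = {name: t for name, t in latencies.items() if t is not None}
    return min(alive, key=alive.__getitem__) if alive else None
```

Probing all nodes first and then choosing by measured latency matches the "probe all → fastest alive" flow in the diagram; a dead node simply contributes `None` and is skipped.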

API

Endpoint   Method   Description
/health    GET      Node health check with model inventory
/chain     POST     Run chained inference ({"prompt": "...", "mode": "fast|full"})
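
A minimal client-side helper for the POST /chain body. The field names match the table above; the validation and helper name are assumptions, not part of the repo's API surface.

```python
import json

VALID_MODES = {"fast", "full"}  # fast: single node; full: refine via a second node

def chain_request(prompt: str, mode: str = "fast") -> bytes:
    """Build the JSON body for POST /chain as described in the API table."""
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}")
    return json.dumps({"prompt": prompt, "mode": mode}).encode()

# Usage with the stdlib, against the local server started below:
# urllib.request.Request("http://localhost:8100/chain",
#                        data=chain_request("hello", "full"),
#                        headers={"Content-Type": "application/json"})
```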

Run

pip install -r requirements.txt
python server.py  # http://localhost:8100

Test

pip install pytest httpx
pytest tests/
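
The repo's tests run under pytest with httpx against the FastAPI app. As a hedged sketch of the logic worth testing, here is a pytest-style check of the two-pass "full" mode from the diagram, with stub functions standing in for real Ollama calls; the `chain` helper and test names are illustrative, not the repo's actual tests.

```python
def chain(prompt, first, second=None):
    """First node answers; in full mode a second node refines the draft."""
    draft = first(prompt)
    return second(draft) if second else draft

def test_fast_mode_single_pass():
    # fast mode: one node, its answer is returned as-is
    assert chain("2+2?", lambda p: "4") == "4"

def test_full_mode_refines_draft():
    # full mode: the second node receives the first node's draft
    refined = chain("2+2?", lambda p: "draft", lambda d: f"refined:{d}")
    assert refined == "refined:draft"
```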

Deploy

docker build -t ai-chain .
docker run -p 8100:8100 ai-chain

About

AI Chain — Distributed multi-node LLM inference with automatic failover. Chain Ollama models across Raspberry Pi fleet for load-balanced AI at the edge. FastAPI + Python.
