A powerful multi-agent system built with the Agno framework, featuring specialized agents for different tasks including web search, financial analysis, code execution, and system administration.
- 🤖 Multiple Specialized Agents: General, Search, Finance, Code, and System agents
- 🔧 Agno Framework: Built on the modern Agno Agent framework
- 🦙 Ollama Integration: Local LLM inference with multiple models (Mistral, Llama3.2, Phi3)
- 🌐 FastAPI Backend: RESTful API for agent interactions
- 🎨 Streamlit UI: Beautiful web interface for chat interactions
- 👥 Team Collaboration: Multi-agent teams with coordination capabilities
- 🏗️ Extensible Architecture: Easy to add new agents and tools
# Full setup with dependencies and models
make setup
# Or manually
python -m scripts.setup

# Install dependencies
pip install -e .
# Start Ollama (if not running)
ollama serve
# Download required models
ollama pull mistral:latest
ollama pull llama3.2:3b
ollama pull phi3:mini

# Start both API and UI
make run
# or
python -m scripts.run both
# Start only API
make run-api
# Start only UI
make run-ui

- Streamlit UI: http://localhost:8501
- API Documentation: http://localhost:8000/docs
- API Health Check: http://localhost:8000/health
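The health endpoint can also be checked from Python. A minimal sketch using only the standard library (the exact payload shape depends on the API implementation, so treat the return value as opaque JSON):

```python
import json
import urllib.request

def check_health(base_url: str = "http://localhost:8000"):
    """Return the parsed /health payload, or None if the API is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=2) as resp:
            return json.loads(resp.read())
    except OSError:
        return None
```

Calling `check_health()` while the stack is down simply returns `None`, which makes it convenient for startup scripts.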
The system follows the official Agno Agent App architecture:
local-agent/
├── agents/ # Individual agent definitions
│ ├── general.py # General conversation agent
│ ├── search.py # Web search with DuckDuckGo
│ ├── finance.py # Financial analysis with YFinance
│ ├── code.py # Python code execution
│ ├── system.py # System administration
│ └── settings.py # Model configuration
├── api/ # FastAPI backend
│ ├── main.py # API application
│ ├── agents.py # Agent endpoints
│ └── teams.py # Team collaboration endpoints
├── ui/ # Streamlit frontend
│ └── agent_chat.py # Chat interface
├── teams/ # Multi-agent teams
│ └── research.py # Collaborative research team
├── utils/ # Shared utilities
│ ├── logging_config.py
│ └── model_utils.py
├── workspace/ # Agno workspace configuration
│ ├── settings.py # Workspace settings
│ └── dev_resources.py
└── scripts/ # Helper scripts
├── setup.py # Automatic setup
└── run.py # Service runner
- Natural conversation handling
- General assistance and information
- Model: Configurable (default: mistral:latest)
- Web search using DuckDuckGo
- Real-time information retrieval
- Tools: DuckDuckGo search
- Model: llama3.2:3b
- Stock price analysis
- Financial data retrieval
- Market information
- Tools: YFinance
- Model: mistral:latest
- Python code execution
- Mathematical calculations
- Programming assistance
- Tools: Python REPL, Calculator
- Model: llama3.2:latest
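The Calculator tool itself is not shown in this snippet. As an illustration only (the name and behavior here are hypothetical, not the repo's actual implementation), a safe arithmetic tool can evaluate expressions through the `ast` module instead of `eval`:

```python
import ast
import operator

# Operators the hypothetical calculator tool supports
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calculator(expression: str) -> float:
    """Safely evaluate a basic arithmetic expression, e.g. '2 * (3 + 4)'."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval").body)
```

Restricting evaluation to whitelisted AST nodes keeps the tool safe to expose to an LLM, unlike a raw `eval` call.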
- Shell command execution
- System administration
- File management
- Tools: Shell execution
- Model: phi3:mini
- Collaborative multi-agent team
- Combines search, analysis, and synthesis
- Members: Search + Finance + General agents
- Mode: Coordinate for collaborative work
- GET /health - Health check
- GET / - Root endpoint
- GET /agents - List all agents
- POST /agents/{agent_id}/chat - Chat with a specific agent
- GET /teams - List all teams
- POST /teams/{team_id}/chat - Chat with an agent team
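These endpoints can be called from Python as well as curl. A sketch using only the standard library (the response schema depends on the API, so the reply is returned as raw parsed JSON):

```python
import json
import urllib.request

def build_chat_request(agent_id: str, message: str,
                       base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Build a POST request matching POST /agents/{agent_id}/chat."""
    return urllib.request.Request(
        f"{base_url}/agents/{agent_id}/chat",
        data=json.dumps({"message": message}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def chat(agent_id: str, message: str) -> dict:
    """Send the message to a running API instance and return the parsed JSON reply."""
    with urllib.request.urlopen(build_chat_request(agent_id, message)) as resp:
        return json.loads(resp.read())
```

For example, `chat("search", "Search for latest AI news")` mirrors the curl call shown later in this README.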
Edit agents/settings.py to configure models:
MODEL_CONFIG = {
    "general": "mistral:latest",
    "search": "llama3.2:3b",
    "finance": "mistral:latest",
    "code": "llama3.2:latest",
    "system": "phi3:mini",
}

Edit workspace/settings.py for workspace configuration:
ws_settings = WorkspaceSettings(
    ws_name="local-agent-agno",
    api_port=8000,
    streamlit_port=8501,
    # ... other settings
)

# Setup and installation
make setup # Full setup
make install # Install dependencies only
# Running services
make run # Start both API and UI
make run-api # Start API only
make run-ui # Start UI only
# Development
make test # Run tests
make lint # Run linter
make format # Format code
make clean # Clean temporary files
# Ollama management
make ollama-status # Check Ollama status
make ollama-models # List available models

- Agno: Modern agent framework
- Ollama: Local LLM inference
- FastAPI: Web API framework
- Streamlit: Web UI framework
- DuckDuckGo: Web search
- YFinance: Financial data
- Rich: Terminal formatting
- Uvicorn: ASGI server
- Pytest: Testing framework
- Ruff: Code linting and formatting
- Create Agent File: Add a new agent in agents/
- Define Tools: Add the required tools to the agent
- Update Settings: Add the model configuration
- Register in API: Add endpoints in api/agents.py
- Update UI: Add the agent to the selection in the UI
Example agent structure:
from agno import Agent
from agno.models.ollama import OllamaChat
agent = Agent(
    name="MyAgent",
    model=OllamaChat(id="mistral:latest"),
    description="Agent description",
    instructions="Detailed instructions",
    tools=[my_tool_function],
)

- Open http://localhost:8501
- Select an agent or team
- Start chatting!
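The my_tool_function placeholder in the example agent above can be an ordinary Python function. Whether Agno expects a decorator or particular docstring conventions depends on the framework version, so treat this as a sketch:

```python
from datetime import datetime, timezone

def current_utc_time() -> str:
    """Tool: return the current UTC time as an ISO-8601 string."""
    return datetime.now(timezone.utc).isoformat()
```

A clear docstring matters here: agent frameworks typically surface it to the model so it knows when to call the tool.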
# Chat with search agent
curl -X POST "http://localhost:8000/agents/search/chat" \
-H "Content-Type: application/json" \
-d '{"message": "Search for latest AI news"}'
# Chat with research team
curl -X POST "http://localhost:8000/teams/research/chat" \
-H "Content-Type: application/json" \
-d '{"message": "Research the current state of renewable energy"}'

For production deployment with the full Agno stack:
- Setup PostgreSQL database
- Configure environment variables
- Update workspace settings
- Deploy with your preferred method (Docker, Kubernetes, etc.)
The system is designed to work seamlessly with the Agno framework's production features including database persistence, user management, and scalable deployment.
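For the environment-variable step, configuration could be read with a small helper. The variable names below are illustrative, not part of the Agno stack:

```python
import os

def load_db_config() -> dict:
    """Read hypothetical PostgreSQL settings from the environment, with local defaults."""
    return {
        "host": os.environ.get("DB_HOST", "localhost"),
        "port": int(os.environ.get("DB_PORT", "5432")),
        "name": os.environ.get("DB_NAME", "local_agent"),
        "user": os.environ.get("DB_USER", "postgres"),
    }
```

Keeping defaults that match local development lets the same code run unchanged in both environments.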
# Create a new directory for the frontend
mkdir -p ui/web
cd ui/web

# Initialize a new Next.js project with TypeScript
npx create-next-app@latest . --typescript --tailwind --eslint --app --src-dir --import-alias "@/*"

# Install the base dependencies
npm install @radix-ui/react-slot class-variance-authority clsx tailwind-merge lucide-react

# Install the shadcn/ui CLI
npx shadcn-ui@latest init
# Answer the prompts:
# - Style: Default
# - Base color: Slate
# - CSS variables: Yes
# - React Server Components: Yes
# - Tailwind CSS class sorting: Yes
# - Layout: Yes
# - Components directory: @/components
# - Utils directory: @/lib/utils
# - Include example components: No

# Install the essential components
npx shadcn@latest add button
npx shadcn@latest add card
npx shadcn@latest add dialog
npx shadcn@latest add input
npx shadcn@latest add textarea
npx shadcn@latest add sonner

ui/web/
├── app/
│ ├── layout.tsx # Main layout
│ ├── page.tsx # Home page
│ └── globals.css # Global styles
├── components/
│ ├── ui/ # shadcn/ui components
│ └── chat/ # Chat-specific components
├── lib/
│ └── utils.ts # Utilities
└── public/ # Static assets
// app/api/chat/route.ts
import { NextResponse } from "next/server"

export async function POST(req: Request) {
  try {
    const { message } = await req.json()
    // TODO: Integrate with your agno backend
    return NextResponse.json({ message: "Agent response" })
  } catch (error) {
    return NextResponse.json(
      { error: "Processing error" },
      { status: 500 }
    )
  }
}

- Create an API service in the backend to communicate with agno
- Configure the API routes in Next.js to call this service
- Handle authentication and sessions if needed
# Start the development server
npm run dev

# Build for production
npm run build

# Run in production
npm start

- Configure the environment variables
- Build the application
- Deploy to your preferred platform (Vercel, Netlify, etc.)
- Add authentication
- Implement real-time chat
- Add animations
- Optimize performance
- Add tests
- Set up CI/CD