A modern web-based multi-agent system built with Flask and Ollama. Create, manage, and interact with multiple AI agents that can communicate with each other, execute tasks, and maintain persistent memory through a SQLite knowledge base.
Get up and running in 5 minutes:
# 1. Install Ollama (if not already installed)
# Visit https://ollama.ai and follow installation instructions
# 2. Download a model
ollama pull llama3.2
# 3. Install Python dependencies
pip install -r requirements.txt
# 4. Initialize the database
python init_db.py
# 5. (Optional) Add sample agents
python seed_db.py
# 6. Start the application
python app.py

Then open your browser to http://localhost:5001
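Once the server is up, you can sanity-check the install from Python. This is a minimal sketch assuming the /api/health endpoint listed in the API section and the default port 5001:

```python
import json
import urllib.request

BASE_URL = "http://localhost:5001"  # default port used in this README

def health_url(base_url=BASE_URL):
    """Build the URL for the /api/health endpoint."""
    return f"{base_url}/api/health"

def fetch_health(base_url=BASE_URL):
    """Call /api/health and return the parsed JSON status (server must be running)."""
    with urllib.request.urlopen(health_url(base_url)) as resp:
        return json.load(resp)
```

With the app running, `fetch_health()` should return the system health/status object.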
- 🤖 Multiple Agents: Create and manage multiple AI agents with unique personalities and capabilities
- 💬 Web Interface: Beautiful, modern web UI built with Tailwind CSS and shadcn-inspired components
- 📡 Agent Communication: Agents can send messages to each other and have multi-round conversations
- 💾 Persistent Storage: SQLite database stores agents and all message history
- 📚 Knowledge Base: Shared knowledge base tracks all interactions, tasks, and file operations
- 🔧 File Operations: Agents can read, write, and list files/directories
- 🔒 Tool Access Control: Fine-grained control over which tools each agent can use (read-only, write-only, or full access)
- 🎯 Task Execution: Agents can execute complex tasks with context awareness
- 🌐 Real-time Updates: WebSocket support for real-time communication
The Multi-Agent System operates through a layered architecture that enables AI agents to work independently and collaboratively:
- Agent Creation: Create agents through the web interface with custom personalities, models, and settings
- Agent Manager: Manages the lifecycle of all agents, persisting them to SQLite database
- Message Bus: Routes messages between agents, enabling direct communication and multi-round conversations
- Knowledge Base: Every interaction (chat, task, file operation, agent message) is stored for context and history
- Ollama Integration: Each agent uses local Ollama models for AI-powered responses
graph TB
User[👤 User] -->|HTTP/WebSocket| WebUI[🌐 Web Interface]
WebUI -->|REST API| Flask[⚙️ Flask App]
Flask -->|Create/Delete| AgentMgr[📋 Agent Manager]
Flask -->|Chat/Task| Agent[🤖 Agent]
Flask -->|Route Message| MsgBus[📨 Message Bus]
AgentMgr -->|Load/Save| DB[(🗄️ SQLite DB)]
Agent -->|Generate Response| Ollama[🧠 Ollama API]
Agent -->|Store Interaction| KB[📚 Knowledge Base]
MsgBus -->|Store Message| KB
KB -->|Persist| DB
Agent -->|Read Context| KB
MsgBus -->|Deliver| Agent
style User fill:#e1f5ff
style WebUI fill:#fff4e1
style Flask fill:#ffe1f5
style AgentMgr fill:#f5e1ff
style Agent fill:#e1ffe1
style Ollama fill:#ffe1e1
style KB fill:#fff4e1
style DB fill:#e1e1ff
style MsgBus fill:#ffe1cc
- Web Interface (templates/, static/): User-facing pages for managing agents, chatting, and viewing knowledge
- Flask App (app.py): REST API and WebSocket server handling all HTTP requests
- Agent Manager (agent_manager.py): Manages agent lifecycle, registration, and persistence
- Enhanced Agent (agent_core.py): Individual agent with chat, task execution, file ops, and messaging capabilities
- Message Bus (message_bus.py): Routes messages between agents and stores them in the knowledge base
- Knowledge Base (knowledge_base.py): SQLite wrapper for storing all interactions and providing context
- Conversation Orchestrator (conversation_orchestrator.py): Manages multi-agent conversations with intelligent routing
- Direct Chat: Chat with individual agents for help, advice, or task execution
- Agent Collaboration: Multiple agents work together on complex objectives
- Task Automation: Agents execute tasks like code generation, file operations, analysis
- Knowledge Accumulation: Every interaction builds an agent's knowledge for better future responses
- Python 3.7+: Ensure Python is installed on your system
- Ollama: Install Ollama from https://ollama.ai
- Local Model: Download a model using Ollama (e.g., ollama pull llama3.2)
git clone <repository-url>
cd agent

pip install -r requirements.txt

# Install Ollama (if not already installed)
# Visit https://ollama.ai for installation instructions
# Download a model (e.g., llama3.2)
ollama pull llama3.2
# Verify the model is available
ollama list

# Create the database and tables
python init_db.py
# (Optional) Seed the database with sample agents
python seed_db.py

The system uses SQLite to store agents and message history. Initialize the database before first use:
# Create database and tables
python init_db.py
# Options:
# --db <path> Custom database path (default: data/agent.db)
# --reset Drop existing tables and recreate (WARNING: deletes all data!)

Add sample agents to get started quickly:
# Add sample agents (skips existing ones)
python seed_db.py
# Options:
# --db <path> Custom database path
# --overwrite Replace all existing agents

The seed script creates 3 sample agents:
- Designer: Creative UI/UX designer with expertise in creating beautiful, functional, and user-friendly interfaces
- Coder: Expert software developer who writes clean, efficient code (builds files in the agent_code/ folder)
- Tester: Quality assurance engineer with expertise in testing methodologies and bug identification
Clear agents and/or interactions from the database:
# Clear all agents (default)
python clear_db.py
# Clear both agents and interactions
python clear_db.py --all
# Clear only interactions
python clear_db.py --interactions
# Skip confirmation prompt
python clear_db.py --yes
# Options:
# --db <path> Custom database path
# --agents Clear agents (default)
# --interactions Clear interactions/messages
# --all Clear both agents and interactions
# --yes Skip confirmation prompt

Start the Flask web server:
python app.py

The application will be available at:
- Web Interface: http://localhost:5001
- API: http://localhost:5001/api
┌─────────────────────────────────────────────────────────┐
│ Web Interface (Flask) │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌─────────┐│
│ │ Dashboard│ │ Chat │ │ Agent │ │Knowledge ││
│ │ │ │ │ │ Comm │ │ Base ││
│ └──────────┘ └──────────┘ └──────────┘ └─────────┘│
└─────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ Agent Manager │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Manages agent lifecycle (create, delete, list) │ │
│ └──────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────┘
│
┌─────────────────┴─────────────────┐
▼ ▼
┌──────────────┐ ┌──────────────┐
│ Agents │ │ Message Bus │
│ │ │ │
│ - Enhanced │◄─────────────────┤ - Routes │
│ Agent │ │ messages │
│ - Ollama │ │ - Registers │
│ Client │ │ agents │
└──────────────┘ └──────────────┘
│ │
└─────────────────┬─────────────────┘
▼
┌─────────────────┐
│ Knowledge Base │
│ │
│ - SQLite DB │
│ - Interactions │
│ - Agent Config │
└─────────────────┘
- Manages agent lifecycle (create, delete, list)
- Loads agents from SQLite on startup
- Persists agents to database
- Coordinates with Knowledge Base and Message Bus
- Core agent implementation with Ollama integration
- Handles chat, task execution, and file operations
- Maintains conversation history
- Integrates with knowledge base for context
- Routes messages between agents
- Registers agents for communication
- Stores messages in knowledge base
- SQLite database management
- Stores agent configurations
- Tracks all interactions (chats, tasks, file ops)
- Provides search and filtering capabilities
- Flask REST API endpoints
- WebSocket support for real-time updates
- Serves web interface templates
CREATE TABLE agents (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT UNIQUE NOT NULL,
model TEXT NOT NULL,
system_prompt TEXT,
settings TEXT, -- JSON
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL
);

CREATE TABLE knowledge_base (
id INTEGER PRIMARY KEY AUTOINCREMENT,
timestamp TEXT NOT NULL,
agent_name TEXT NOT NULL,
interaction_type TEXT NOT NULL, -- user_chat, agent_chat, task_execution, etc.
content TEXT NOT NULL,
metadata TEXT, -- JSON
related_agent TEXT
);

- View all agents
- Create new agents
- Quick access to other features
- View statistics (agent count, message count)
- Chat directly with an agent
- Execute tasks
- Perform file operations
- View conversation history
- Send single messages between agents
- Start multi-round conversations
- View message history
- Monitor conversation progress
- Browse all interactions
- Filter by agent or interaction type
- Search interactions
- View metadata
- Click "Create Agent" on the dashboard
- Fill in the form:
- Name: Unique agent identifier
- Model: Ollama model name (e.g., llama3.2)
- System Prompt: Defines agent personality/behavior
- Temperature: 0.0-2.0 (creativity level)
- Max Tokens: Maximum response length
- Tool Access: Select which tools the agent can use (checkboxes):
- ✅ write_file - Write content to files
- ✅ read_file - Read file contents
- ✅ list_directory - List directory contents
- Click "Create Agent"
The agent is saved to the database and available immediately.
- Go to Agent Communication page
- Select sender and receiver agents
- Type your message
- Click "Send Message"
- Select two agents
- Enter initial message
- Set number of rounds
- Click "Start Conversation"
Agents will alternate responding to each other for the specified number of rounds.
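The alternation is simple to picture: the initial message goes to the second agent, its reply goes back to the first, and so on for the configured number of rounds. A standalone illustration of that loop (the respond callables stand in for real agents, which in this project would answer through Ollama):

```python
def run_conversation(agent_a, agent_b, initial_message, rounds):
    """Alternate replies between two respond() callables for `rounds` rounds.

    Each agent is any callable that takes a message string and returns a
    reply. Returns the transcript as (speaker_index, message) pairs.
    """
    transcript = []
    speakers = [agent_b, agent_a]  # agent_b answers the initial message first
    message = initial_message
    for i in range(rounds):
        speaker = speakers[i % 2]
        message = speaker(message)
        transcript.append((i % 2, message))
    return transcript

# Example with trivial stand-in agents:
echo = lambda m: f"echo: {m}"
upper = lambda m: m.upper()
transcript = run_conversation(upper, echo, "hello", 3)
# → [(0, 'echo: hello'), (1, 'ECHO: HELLO'), (0, 'echo: ECHO: HELLO')]
```

Swapping the lambdas for real agent chat calls yields the same back-and-forth pattern the Agent Communication page drives.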
GET /api/agents - List all agents
POST /api/agents - Create a new agent
DELETE /api/agents/<name> - Delete an agent
GET /api/agents/<name>/chat - Get chat history
POST /api/agents/<name>/chat - Send chat message
POST /api/agents/<name>/tasks/execute - Execute a task

POST /api/agents/<sender>/message/<receiver> - Send message between agents
POST /api/agents/<agent1>/conversation/<agent2> - Start multi-round conversation

GET /api/knowledge - Query interactions
- Query params: agent_name, interaction_type, search, limit, offset

GET /api/health - System health and status
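For the knowledge endpoint, the query string is built from the params listed above. A small sketch that composes (but does not send) such a URL; the helper name is hypothetical:

```python
from urllib.parse import urlencode

BASE_URL = "http://localhost:5001"  # default port in this README

def knowledge_url(base_url=BASE_URL, **params):
    """Build a GET /api/knowledge URL from the supported query params.

    Supported keys (per the endpoint list above): agent_name,
    interaction_type, search, limit, offset.
    """
    allowed = {"agent_name", "interaction_type", "search", "limit", "offset"}
    unknown = set(params) - allowed
    if unknown:
        raise ValueError(f"unsupported query params: {sorted(unknown)}")
    query = urlencode(params)
    return f"{base_url}/api/knowledge" + (f"?{query}" if query else "")
```

The resulting URL can be fetched with urllib.request.urlopen or curl, e.g. `knowledge_url(agent_name="Coder", limit=5)`.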
When creating an agent, you can configure:
- Model: Ollama model name (must be downloaded locally)
- System Prompt: Defines agent behavior and personality
- Temperature: 0.0-2.0 (lower = more focused, higher = more creative)
- Max Tokens: Maximum tokens in responses (100-8192)
- API Endpoint: Ollama API endpoint (default: http://localhost:11434)
- Tools: List of allowed tools (optional, defaults to all tools)
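Combined with POST /api/agents, these settings map naturally onto a JSON request body. The name, model, and tools keys match the curl example later in this README; the remaining key names are assumptions based on the settings listed above:

```python
import json

# Hypothetical request body for POST /api/agents. "name", "model", and
# "tools" match the curl example in this README; the other key names are
# assumptions based on the documented settings.
payload = {
    "name": "reviewer",
    "model": "llama3.2",
    "system_prompt": "You review code but never modify it.",
    "temperature": 0.2,   # 0.0-2.0: lower = more focused
    "max_tokens": 1024,   # 100-8192
    "tools": ["read_file", "list_directory"],
}

# Sanity-check the documented ranges before sending.
assert 0.0 <= payload["temperature"] <= 2.0
assert 100 <= payload["max_tokens"] <= 8192
body = json.dumps(payload)
```

The `body` string is what you would POST with a Content-Type: application/json header.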
Control which tools each agent can access for enhanced security and specialization:
- write_file: Write content to files in the agent_code directory
- read_file: Read file contents from the filesystem
- list_directory: List contents of directories
# Create a read-only agent
agent_manager.create_agent(
name="code_reviewer",
model="llama3.2",
system_prompt="You review code but never modify it.",
tools=["read_file", "list_directory"]
)
# Create a write-only agent
agent_manager.create_agent(
name="doc_generator",
model="llama3.2",
system_prompt="You generate documentation files.",
tools=["write_file"]
)
# Create a full-access agent (default)
agent_manager.create_agent(
name="developer",
model="llama3.2",
system_prompt="You are a full-stack developer.",
tools=None # or ["write_file", "read_file", "list_directory"]
)

# Create read-only agent
curl -X POST http://localhost:5001/api/agents \
-H "Content-Type: application/json" \
-d '{
"name": "reader",
"model": "llama3.2",
"tools": ["read_file", "list_directory"]
}'
# Get available tools
curl http://localhost:5001/api/tools

See docs/TOOL_ACCESS_CONTROL.md for detailed documentation.
The database path can be configured in knowledge_base.py:
knowledge_base = KnowledgeBase(db_path="data/agent.db")

agent/
├── app.py # Flask web application
├── agent_core.py # Core agent implementation
├── agent_manager.py # Agent lifecycle management
├── knowledge_base.py # SQLite database management
├── message_bus.py # Agent-to-agent messaging
├── init_db.py # Database initialization script
├── seed_db.py # Database seeding script
├── requirements.txt # Python dependencies
├── config.yaml # Example configuration
├── data/ # Database directory (gitignored)
│ └── agent.db # SQLite database
├── static/ # Static web assets
│ ├── css/
│ └── js/
└── templates/ # HTML templates
├── index.html
├── chat.html
├── agent_comm.html
└── knowledge.html
# The app runs in debug mode by default
python app.py
# Or use Flask's development server
export FLASK_APP=app.py
export FLASK_ENV=development
flask run

# Initialize database
python init_db.py
# Seed with sample agents
python seed_db.py
# Reset database (WARNING: deletes all data)
python init_db.py --reset

Database not found:

python init_db.py

Agents not loading:
- Check database exists: ls data/agent.db
- Verify tables exist: Run init_db.py
- Check database permissions
Model not found:
# Download the model
ollama pull llama3.2
# Verify it's available
ollama list

Connection refused:
# Start Ollama (usually runs automatically)
ollama serve
# Test connection
curl http://localhost:11434/api/tags

Port already in use:
- Change the port in app.py: socketio.run(app, port=5001)
- Or kill the process already using the port
Agents not appearing:
- Refresh the page
- Check browser console for errors
- Verify database has agents:
python seed_db.py
# Reinstall dependencies
pip install -r requirements.txt
# Or use a virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt

You can use any model available in Ollama. Popular options:
- llama3.2 - Meta's Llama 3.2 (recommended)
- llama3 - Meta's Llama 3
- mistral - Mistral AI model
- codellama - Code-focused Llama variant
- phi3 - Microsoft's Phi-3
Download models:
ollama pull <model-name>

This project is provided as-is for local use.
Contributions are welcome! Please feel free to submit issues or pull requests.