Evo AI is an open-source platform for creating and managing AI agents, enabling integration with different AI models and services.
The Evo AI platform allows:
- Creation and management of AI agents
- Integration with different language models
- Client management
- MCP server configuration
- Custom tools management
- Google Agent Development Kit (ADK): Base framework for agent development, providing support for LLM Agents, Sequential Agents, Loop Agents, Parallel Agents and Custom Agents
- JWT authentication with email verification
- Agent 2 Agent (A2A) Protocol Support: Interoperability between AI agents following Google's A2A specification
- Workflow Agent with LangGraph: Building complex agent workflows with LangGraph and ReactFlow
- Secure API Key Management: Encrypted storage of API keys with Fernet encryption
- Agent Organization: Folder structure for organizing agents by categories
Evo AI supports different types of agents that can be flexibly combined to create complex solutions:
LLM Agent: an agent based on language models such as GPT-4 or Claude. It can be configured with tools, MCP servers, and sub-agents.
{
"client_id": "{{client_id}}",
"name": "personal_assistant",
"description": "Specialized personal assistant",
"role": "Personal Assistant",
"goal": "Help users with daily tasks and provide relevant information",
"type": "llm",
"model": "gpt-4",
"api_key_id": "stored-api-key-uuid",
"folder_id": "folder_id (optional)",
"instruction": "Detailed instructions for agent behavior",
"config": {
"tools": [
{
"id": "tool-uuid",
"envs": {
"API_KEY": "tool-api-key",
"ENDPOINT": "http://localhost:8000"
}
}
],
"mcp_servers": [
{
"id": "server-uuid",
"envs": {
"API_KEY": "server-api-key",
"ENDPOINT": "http://localhost:8001"
},
"tools": ["tool_name1", "tool_name2"]
}
],
"custom_tools": {
"http_tools": []
},
"sub_agents": ["sub-agent-uuid"]
}
}
A2A Agent: an agent that implements Google's A2A protocol for agent interoperability.
{
"client_id": "{{client_id}}",
"type": "a2a",
"agent_card_url": "http://localhost:8001/api/v1/a2a/your-agent/.well-known/agent.json",
"folder_id": "folder_id (optional)",
"config": {
"sub_agents": ["sub-agent-uuid"]
}
}
Sequential Agent: executes a sequence of sub-agents in a specific order.
{
"client_id": "{{client_id}}",
"name": "processing_flow",
"type": "sequential",
"folder_id": "folder_id (optional)",
"config": {
"sub_agents": ["agent-uuid-1", "agent-uuid-2", "agent-uuid-3"]
}
}
Parallel Agent: executes multiple sub-agents simultaneously.
{
"client_id": "{{client_id}}",
"name": "parallel_processing",
"type": "parallel",
"folder_id": "folder_id (optional)",
"config": {
"sub_agents": ["agent-uuid-1", "agent-uuid-2"]
}
}
Loop Agent: executes sub-agents in a loop with a defined maximum number of iterations.
{
"client_id": "{{client_id}}",
"name": "loop_processing",
"type": "loop",
"folder_id": "folder_id (optional)",
"config": {
"sub_agents": ["sub-agent-uuid"],
"max_iterations": 5
}
}
Workflow Agent: executes sub-agents in a custom workflow defined by a graph structure. This agent type uses LangGraph to implement complex agent workflows with conditional execution paths.
{
"client_id": "{{client_id}}",
"name": "workflow_agent",
"type": "workflow",
"folder_id": "folder_id (optional)",
"config": {
"sub_agents": ["agent-uuid-1", "agent-uuid-2", "agent-uuid-3"],
"workflow": {
"nodes": [],
"edges": []
}
}
}
The workflow structure is built using ReactFlow in the frontend, allowing visual creation and editing of complex agent workflows with nodes (representing agents or decision points) and edges (representing flow connections).
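This README does not document the node/edge schema, so the following is a purely illustrative sketch of a populated workflow field. The node and edge shapes follow ReactFlow conventions; the "agent" node type and the data fields are hypothetical assumptions, not the platform's confirmed schema:

# Illustrative only: a populated workflow graph in ReactFlow-like shape.
# The "agent" node type and the "data" fields are hypothetical, not the
# platform's confirmed schema.
workflow = {
    "nodes": [
        {"id": "start", "type": "agent", "position": {"x": 0, "y": 0},
         "data": {"agent_id": "agent-uuid-1"}},
        {"id": "review", "type": "agent", "position": {"x": 250, "y": 0},
         "data": {"agent_id": "agent-uuid-2"}},
    ],
    "edges": [
        {"id": "e1", "source": "start", "target": "review"},
    ],
}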
Task Agent: executes a specific task using a target agent. The Task Agent provides a streamlined approach to structured task execution: the agent_id specifies which agent will process the task, and the task description can include dynamic content placeholders.
{
"client_id": "{{client_id}}",
"name": "web_search_task",
"type": "task",
"folder_id": "folder_id (optional)",
"config": {
"tasks": [
{
"agent_id": "search-agent-uuid",
"description": "Search the web for information about {content}",
"expected_output": "Comprehensive search results with relevant information"
}
],
"sub_agents": ["post-processing-agent-uuid"]
}
}
Key features of Task Agent:
- Passes structured task instructions to the designated agent
- Supports variable content using the {content} placeholder in the task description (see the sketch after this section)
- Provides clear task definition with instructions and expected output format
- Can execute sub-agents after the main task is completed
- Simplifies orchestration for single-focused task execution
Task Agent is ideal for scenarios where you need to execute a specific, well-defined task with clear instructions and expectations.
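The placeholder mechanism referenced above can be pictured as simple string templating; a minimal sketch (the rendering step shown here is hypothetical, not the platform's actual code):

# Minimal sketch: the {content} placeholder is filled with the incoming
# content before the task reaches the target agent (hypothetical step,
# not the platform's actual code).
task_description = "Search the web for information about {content}"
incoming_content = "open-source AI agent platforms"

rendered = task_description.format(content=incoming_content)
print(rendered)
# Search the web for information about open-source AI agent platforms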
Common characteristics across agent types:
- All agent types can have sub-agents
- Sub-agents can be of any type
- Agents can be flexibly combined
- Each type has its own type-specific configuration
- Support for custom tools and MCP servers
Agents can be integrated with MCP (Model Context Protocol) servers for distributed processing:
{
"config": {
"mcp_servers": [
{
"id": "server-uuid",
"envs": {
"API_KEY": "server-api-key",
"ENDPOINT": "http://localhost:8001",
"MODEL_NAME": "gpt-4",
"TEMPERATURE": 0.7,
"MAX_TOKENS": 2000
},
"tools": ["tool_name1", "tool_name2"]
}
]
}
}
Available configurations for MCP servers:
- id: Unique MCP server identifier
- envs: Environment variables for configuration
- API_KEY: Server authentication key
- ENDPOINT: MCP server URL
- MODEL_NAME: Model name to be used
- TEMPERATURE: Text generation temperature (0.0 to 1.0)
- MAX_TOKENS: Maximum token limit per request
- Other server-specific variables
- tools: MCP server tool names for agent use
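For intuition, connecting to such a server from Python could look roughly like the sketch below, which uses the official mcp package's SSE client. The endpoint, header name, and overall wiring are assumptions based on the envs above; Evo AI's internal MCP integration may differ:

import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def list_server_tools() -> None:
    # Endpoint and header name mirror the envs above and are assumptions;
    # Evo AI's internal MCP wiring may differ.
    async with sse_client(
        "http://localhost:8001", headers={"API_KEY": "server-api-key"}
    ) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            result = await session.list_tools()
            print([tool.name for tool in result.tools])

asyncio.run(list_server_tools())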
Different types of agents can be combined to create complex processing flows:
{
"client_id": "{{client_id}}",
"name": "processing_pipeline",
"type": "sequential",
"config": {
"sub_agents": [
"llm-analysis-agent-uuid", // LLM Agent for initial analysis
"a2a-translation-agent-uuid", // A2A Agent for translation
"llm-formatting-agent-uuid" // LLM Agent for final formatting
]
}
}
{
"client_id": "{{client_id}}",
"name": "parallel_analysis",
"type": "sequential",
"config": {
"sub_agents": [
{
"type": "parallel",
"config": {
"sub_agents": [
"analysis-agent-uuid-1",
"analysis-agent-uuid-2",
"analysis-agent-uuid-3"
]
}
},
"aggregation-agent-uuid" // Agent for aggregating results
]
}
}
{
"client_id": "{{client_id}}",
"name": "conversation_system",
"type": "parallel",
"config": {
"sub_agents": [
{
"type": "llm",
"name": "context_agent",
"model": "gpt-4",
"instruction": "Maintain conversation context"
},
{
"type": "a2a",
"agent_card_url": "expert-agent-url"
},
{
"type": "loop",
"config": {
"sub_agents": ["memory-agent-uuid"],
"max_iterations": 1
}
}
]
}
}
To create a new agent, use the endpoint:
POST /api/v1/agents
Content-Type: application/json
Authorization: Bearer your-token-jwt
{
// Configuration of the agent as per the examples above
}
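For example, a minimal Python sketch using the requests library (assumes a local instance on port 8000 and a valid JWT token; the payload mirrors the LLM agent example above):

import requests

# Assumptions: a local Evo AI instance on port 8000 and a valid JWT token.
API_URL = "http://localhost:8000/api/v1/agents"
TOKEN = "your-token-jwt"

agent = {
    "client_id": "client-uuid",
    "name": "personal_assistant",
    "type": "llm",
    "model": "gpt-4",
    "api_key_id": "stored-api-key-uuid",
    "instruction": "Detailed instructions for agent behavior",
    "config": {"tools": [], "mcp_servers": [], "sub_agents": []},
}

response = requests.post(
    API_URL,
    json=agent,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
response.raise_for_status()
print(response.json())  # The created agent, including its UUID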
The platform is built with the following technologies:
- FastAPI: Web framework for building the API
- SQLAlchemy: ORM for database interaction
- PostgreSQL: Main database
- Alembic: Migration system
- Pydantic: Data validation and serialization
- Uvicorn: ASGI server
- Redis: Cache and session management
- JWT: Secure token authentication
- SendGrid/SMTP: Email service for notifications (configurable)
- Jinja2: Template engine for email rendering
- Bcrypt: Password hashing and security
- LangGraph: Framework for building stateful, multi-agent workflows
- ReactFlow: Library for building node-based visual workflows
The Evo AI platform natively supports integration with Langfuse for detailed tracing of agent executions, prompts, model responses, and tool calls, using the OpenTelemetry (OTel) standard.
- Visual dashboard for agent traces, prompts, and executions
- Detailed analytics for debugging and evaluating LLM apps
- Easy integration with Google ADK and other frameworks
- Every agent execution (including streaming) is automatically traced via OpenTelemetry spans
- Data is sent to Langfuse, where it can be visualized and analyzed
To enable Langfuse tracing:
1. Set the environment variables in your .env:
LANGFUSE_PUBLIC_KEY="pk-lf-..."  # Your Langfuse public key
LANGFUSE_SECRET_KEY="sk-lf-..."  # Your Langfuse secret key
OTEL_EXPORTER_OTLP_ENDPOINT="https://cloud.langfuse.com/api/public/otel"  # (or us.cloud... for the US region)
Attention: do not swap the keys! pk-... is public, sk-... is secret.
2. Automatic initialization:
- Tracing is automatically initialized when the application starts (src/main.py).
- Agent execution functions are already instrumented with spans (src/services/agent_runner.py).
3. View in the Langfuse dashboard:
- Access your Langfuse dashboard to see real-time traces.
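Conceptually, the instrumentation wraps each agent execution in an OpenTelemetry span, along these lines (a simplified sketch using the standard opentelemetry-api, not the platform's exact code):

from opentelemetry import trace

tracer = trace.get_tracer("evo_ai.agent_runner")

def run_agent(agent_id: str, message: str) -> str:
    # Each execution is wrapped in a span; attributes identify the agent.
    with tracer.start_as_current_span("agent_execution") as span:
        span.set_attribute("agent.id", agent_id)
        span.set_attribute("input.length", len(message))
        result = f"processed: {message}"  # Placeholder for the real agent call
        span.set_attribute("output.length", len(result))
        return result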
Troubleshooting:
- 401 error (invalid credentials): check that the keys are correct and not swapped in your .env, and make sure the endpoint matches your region (EU or US).
- Context error in async generator: the code is already adjusted to avoid OpenTelemetry context issues in async generators.
- For other integration questions, consult the Langfuse documentation.
Evo AI implements Google's Agent 2 Agent (A2A) protocol, enabling seamless communication and interoperability between AI agents. This implementation includes:
- Standardized Communication: Agents can communicate using a common protocol regardless of their underlying implementation
- Interoperability: Support for agents built with different frameworks and technologies
- Well-Known Endpoints: Standardized endpoints for agent discovery and interaction
- Task Management: Support for task creation, execution, and status tracking
- State Management: Tracking of agent states and conversation history
- Authentication: Secure API key-based authentication for agent interactions
- Agent Card: Each agent exposes a .well-known/agent.json endpoint with its capabilities and configuration
- Task Handling: Support for task creation, execution, and status tracking
- Message Format: Standardized message format for agent communication
- History Tracking: Maintains conversation history between agents
- Artifact Management: Support for handling different types of artifacts (text, files, etc.)
// Agent Card Example
{
"name": "My Agent",
"description": "A helpful AI assistant",
"url": "https://api.example.com/agents/123",
"capabilities": {
"streaming": false,
"pushNotifications": false,
"stateTransitionHistory": true
},
"authentication": {
"schemes": ["apiKey"],
"credentials": {
"in": "header",
"name": "x-api-key"
}
},
"skills": [
{
"id": "search",
"name": "Web Search",
"description": "Search the web for information"
}
]
}
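Because the Agent Card lives at a well-known path, discovery is a plain HTTP GET; for example (a sketch using the requests library and the local agent URL format shown earlier):

import requests

# Agent base path follows the local URL format shown earlier in this README.
card_url = "http://localhost:8001/api/v1/a2a/your-agent/.well-known/agent.json"

card = requests.get(card_url, timeout=10).json()
print(card["name"], card["capabilities"])  # e.g. My Agent {'streaming': False, ...}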
For more information about the A2A protocol, visit Google's A2A Protocol Documentation.
src/
├── api/          # API endpoints
├── core/         # Core business logic
├── models/       # Data models
├── schemas/      # Pydantic schemas for validation
├── services/     # Business services
├── templates/    # Email templates
│   └── emails/   # Jinja2 email templates
├── utils/        # Utilities
└── config/       # Configurations
Before starting, make sure you have the following installed:
- Python: 3.10 or higher
- PostgreSQL: 13.0 or higher
- Redis: 6.0 or higher
- Git: For version control
- Make: For running Makefile commands (usually pre-installed on Linux/Mac; on Windows, use WSL or install via Chocolatey)
You'll also need an email provider:
- SendGrid account (if using the SendGrid email provider)
- SMTP server (if using the SMTP email provider)
- Clone the repository:
git clone https://github.com/EvolutionAPI/evo-ai.git
cd evo-ai
- Create a virtual environment:
make venv
source venv/bin/activate # Linux/Mac
# or
venv\Scripts\activate # Windows
- Install dependencies:
pip install -e . # For basic installation
# or
pip install -e ".[dev]" # For development dependencies
Or using the Makefile:
make install # For basic installation
# or
make install-dev # For development dependencies
- Set up environment variables:
cp .env.example .env
# Edit the .env file with your settings
- Initialize the database and run migrations:
make alembic-upgrade
- Seed the database with initial data:
make seed-all
After installing Evo AI (the backend), you need to install the frontend to access the web interface:
- Clone the frontend repository:
git clone https://github.com/EvolutionAPI/evo-ai-frontend.git
cd evo-ai-frontend
- Follow the installation instructions in the frontend repository's README to set up and run the web interface.
The backend (API) and frontend are separate projects. Make sure both are running for full platform functionality.
After installation, follow these steps to set up your first agent:
- Configure MCP Server: Set up your Model Context Protocol server configuration first
- Create Client or Register: Create a new client or register a user account
- Create Agents: Set up the agents according to your needs (LLM, A2A, Sequential, Parallel, Loop, or Workflow)
Configure your environment using the following key settings:
# Database settings
POSTGRES_CONNECTION_STRING="postgresql://postgres:root@localhost:5432/evo_ai"
# Redis settings
REDIS_HOST="localhost"
REDIS_PORT=6379
REDIS_DB=0
REDIS_PASSWORD="your-redis-password"
# JWT settings
JWT_SECRET_KEY="your-jwt-secret-key"
JWT_ALGORITHM="HS256"
JWT_EXPIRATION_TIME=30 # In minutes
# Email provider configuration
EMAIL_PROVIDER="sendgrid" # Options: "sendgrid" or "smtp"
# SendGrid (if EMAIL_PROVIDER=sendgrid)
SENDGRID_API_KEY="your-sendgrid-api-key"
EMAIL_FROM="noreply@yourdomain.com"
APP_URL="https://yourdomain.com"
# SMTP (if EMAIL_PROVIDER=smtp)
SMTP_FROM="noreply-smtp@yourdomain.com"
SMTP_USER="your-smtp-username"
SMTP_PASSWORD="your-smtp-password"
SMTP_HOST="your-smtp-host"
SMTP_PORT=587
SMTP_USE_TLS=true
SMTP_USE_SSL=false
# Encryption for API keys
ENCRYPTION_KEY="your-encryption-key"
The project uses modern Python packaging standards with pyproject.toml. Key dependencies include:
dependencies = [
"fastapi==0.115.12",
"uvicorn==0.34.2",
"pydantic==2.11.3",
"sqlalchemy==2.0.40",
"psycopg2==2.9.10",
"alembic==1.15.2",
"redis==5.3.0",
"langgraph==0.4.1",
# ... other dependencies
]
For development, additional packages can be installed with:
pip install -e ".[dev]"
This includes development tools like black, flake8, pytest, and more.
The API uses JWT (JSON Web Token) authentication. To access the endpoints, you need to:
- Register a user or log in to obtain a JWT token
- Include the JWT token in the Authorization header of all requests, in the format Bearer <token>
- Tokens expire after a configured period (default: 30 minutes)
- User Registration: POST /api/v1/auth/register
- Email Verification: an email will be sent containing a verification link
- Login: POST /api/v1/auth/login (returns a JWT token to be used in requests)
- Password Recovery (if needed): POST /api/v1/auth/forgot-password and POST /api/v1/auth/reset-password
- Retrieve logged-in user data: POST /api/v1/auth/me
# Login
curl -X POST "http://localhost:8000/api/v1/auth/login" \
-H "Content-Type: application/json" \
-d '{"email": "your-email@example.com", "password": "your-password"}'
# Use received token
curl -X GET "http://localhost:8000/api/v1/clients/" \
-H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
Access control rules:
- Regular users (associated with a client) only have access to their client's resources
- Admin users have access to all resources
- Certain operations (such as creating MCP servers) are restricted to administrators only
- Account lockout mechanism after multiple failed login attempts for enhanced security
The platform uses Jinja2 templates for email rendering with a unified design system:
- Base Template: All emails extend a common base template for consistent styling
- Verification Email: Sent when users register to verify their email address
- Password Reset: Sent when users request a password reset
- Welcome Email: Sent after email verification to guide new users
- Account Locked: Security alert when an account is locked due to multiple failed login attempts
All email templates feature responsive design, clear call-to-action buttons, and fallback mechanisms.
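The inheritance pattern behind the unified design works roughly as in this illustrative Jinja2 sketch (the template names and contents here are hypothetical, not the platform's actual files):

from jinja2 import DictLoader, Environment

# Hypothetical in-memory templates; the real files live in src/templates/emails/.
templates = {
    "base_email.html": "<html><body>{% block content %}{% endblock %}</body></html>",
    "verification_email.html": (
        '{% extends "base_email.html" %}'
        "{% block content %}Click {{ verification_link }} to verify.{% endblock %}"
    ),
}

env = Environment(loader=DictLoader(templates))
html = env.get_template("verification_email.html").render(
    verification_link="https://yourdomain.com/verify/token"
)
print(html)  # Child content rendered inside the shared base layout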
make run # For development with automatic reload
# or
make run-prod # For production with multiple workers
The API will be available at http://localhost:8000
# Database migrations
make init # Initialize Alembic
make alembic-revision message="description" # Create new migration
make alembic-upgrade # Update database to latest version (use to execute existing migrations)
make alembic-downgrade # Revert latest migration
make alembic-migrate message="description" # Create and apply migration
make alembic-reset # Reset database
# Seeders
make seed-admin # Create default admin
make seed-client # Create default client
make seed-mcp-servers # Create example MCP servers
make seed-tools # Create example tools
make seed-all # Run all seeders
# Code verification
make lint # Verify code with flake8
make format # Format code with black
make clear-cache # Clear project cache
For quick setup and deployment, we provide Docker and Docker Compose configurations.
- Docker installed
- Docker Compose installed
- Create and configure the .env file:
cp .env.example .env
# Edit the .env file with your settings, especially:
# - POSTGRES_CONNECTION_STRING
# - REDIS_HOST (should be "redis" when using Docker)
# - JWT_SECRET_KEY
# - SENDGRID_API_KEY
- Build the Docker image:
make docker-build
- Start the services (API, PostgreSQL, and Redis):
make docker-up
- Apply migrations (first time only):
docker-compose exec api python -m alembic upgrade head
- Populate the database with initial data:
make docker-seed
- To check application logs:
make docker-logs
- To stop the services:
make docker-down
Once running, the services are available at:
- API: http://localhost:8000
- API Documentation: http://localhost:8000/docs
- PostgreSQL: localhost:5432
- Redis: localhost:6379
Docker Compose sets up persistent volumes for:
- PostgreSQL data
- Redis data
- Application logs directory
The main environment variables used by the API container:
- POSTGRES_CONNECTION_STRING: PostgreSQL connection string
- REDIS_HOST: Redis host (use "redis" when running with Docker)
- JWT_SECRET_KEY: Secret key for JWT token generation
- EMAIL_PROVIDER: Email provider to use ("sendgrid" or "smtp")
- SENDGRID_API_KEY: SendGrid API key (if using SendGrid)
- EMAIL_FROM: Email used as sender (for SendGrid)
- SMTP_FROM: Email used as sender (for SMTP)
- SMTP_HOST, SMTP_PORT, SMTP_USER, SMTP_PASSWORD: SMTP server configuration
- SMTP_USE_TLS, SMTP_USE_SSL: SMTP security settings
- APP_URL: Base URL of the application
Evo AI implements a secure API key management system that protects sensitive credentials:
- Encrypted Storage: API keys are encrypted using Fernet symmetric encryption before storage
- Secure References: Agents reference API keys by UUID (api_key_id) instead of storing raw keys
- Centralized Management: API keys can be created, updated, and rotated without changing agent configurations
- Client Isolation: API keys are scoped to specific clients for better security isolation
The encryption system uses a secure key defined in the .env file:
ENCRYPTION_KEY="your-secure-encryption-key"
If not provided, a secure key will be generated automatically at startup.
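Under the hood this is standard Fernet usage from the cryptography package; a minimal sketch of the encrypt/decrypt round trip (illustrative, not the platform's exact service code):

from cryptography.fernet import Fernet

# In production the key comes from ENCRYPTION_KEY; a fresh key is generated
# here only to keep the sketch self-contained.
key = Fernet.generate_key()
fernet = Fernet(key)

token = fernet.encrypt(b"sk-actual-api-key-value")  # What gets stored
plaintext = fernet.decrypt(token)                   # Recovered only when needed
assert plaintext == b"sk-actual-api-key-value"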
API keys can be managed through dedicated endpoints:
# Create a new API key
POST /api/v1/agents/apikeys
Content-Type: application/json
Authorization: Bearer your-token-jwt
x-client-id: client-uuid
{
"client_id": "client-uuid",
"name": "My OpenAI Key",
"provider": "openai",
"key_value": "sk-actual-api-key-value"
}
# List all API keys for a client
GET /api/v1/agents/apikeys
Authorization: Bearer your-token-jwt
x-client-id: client-uuid
# Get a specific API key
GET /api/v1/agents/apikeys/{key_id}
Authorization: Bearer your-token-jwt
x-client-id: client-uuid
# Update an API key
PUT /api/v1/agents/apikeys/{key_id}
Content-Type: application/json
Authorization: Bearer your-token-jwt
x-client-id: client-uuid
{
"name": "Updated Key Name",
"provider": "anthropic",
"key_value": "new-key-value",
"is_active": true
}
# Delete an API key (soft delete)
DELETE /api/v1/agents/apikeys/{key_id}
Authorization: Bearer your-token-jwt
x-client-id: client-uuid
Agents can be organized into folders for better management:
# Create a new folder
POST /api/v1/agents/folders
Content-Type: application/json
Authorization: Bearer your-token-jwt
{
"client_id": "client-uuid",
"name": "Marketing Agents",
"description": "Agents for content marketing tasks"
}
# List all folders
GET /api/v1/agents/folders
Authorization: Bearer your-token-jwt
x-client-id: client-uuid
# Get a specific folder
GET /api/v1/agents/folders/{folder_id}
Authorization: Bearer your-token-jwt
x-client-id: client-uuid
# Update a folder
PUT /api/v1/agents/folders/{folder_id}
Content-Type: application/json
Authorization: Bearer your-token-jwt
x-client-id: client-uuid
{
"name": "Updated Folder Name",
"description": "Updated folder description"
}
# Delete a folder
DELETE /api/v1/agents/folders/{folder_id}
Authorization: Bearer your-token-jwt
x-client-id: client-uuid
# List agents in a folder
GET /api/v1/agents/folders/{folder_id}/agents
Authorization: Bearer your-token-jwt
x-client-id: client-uuid
# Assign an agent to a folder
PUT /api/v1/agents/{agent_id}/folder
Content-Type: application/json
Authorization: Bearer your-token-jwt
x-client-id: client-uuid
{
"folder_id": "folder-uuid"
}
# Remove an agent from any folder
PUT /api/v1/agents/{agent_id}/folder
Content-Type: application/json
Authorization: Bearer your-token-jwt
x-client-id: client-uuid
{
"folder_id": null
}
When listing agents, you can filter by folder:
GET /api/v1/agents?folder_id=folder-uuid
Authorization: Bearer your-token-jwt
x-client-id: client-uuid
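The same filtered listing in Python might look like this (a sketch with requests; the endpoint and headers follow the examples above, and the JSON-list response shape is an assumption):

import requests

# Endpoint and headers follow the examples above; the JSON-list response
# shape is an assumption for illustration.
response = requests.get(
    "http://localhost:8000/api/v1/agents",
    params={"folder_id": "folder-uuid"},
    headers={
        "Authorization": "Bearer your-token-jwt",
        "x-client-id": "client-uuid",
    },
)
response.raise_for_status()
for agent in response.json():
    print(agent["name"])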
The interactive API documentation is available at:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
- Logs are stored in the logs/ directory with the format {logger_name}_{date}.log
- The system maintains audit logs for important administrative actions
- Each action is recorded with information such as user, IP, date/time, and details
We welcome contributions from the community! Here's how you can help:
- Fork the project
- Create a feature branch (git checkout -b feature/AmazingFeature)
- Make your changes and add tests if possible
- Run the tests and make sure they pass
- Commit your changes following the conventional commits format (feat: add amazing feature)
- Push to the branch (git push origin feature/AmazingFeature)
- Open a Pull Request
Please read our Contributing Guidelines for more details.
This project is licensed under the Apache License 2.0.
The use of the name, logo, or trademark "Evolution API" is protected and not automatically granted by the license. See section 6 (Trademarks) of the license for details about trademark usage.