A modular AI agent system with comprehensive metrics tracking, multi-provider LLM support, and intelligent task orchestration.
- 🤖 Multi-Agent Orchestration: Intelligent task breakdown and execution
- 📊 Real-Time Metrics: Actual token usage, costs, and performance tracking
- 🔌 Multi-Provider Support: XAI (Grok), OpenAI, Anthropic, Ollama, Z.ai
- ⚡ Async Architecture: Concurrent task execution with proper resource management
- 🎯 Role-Based Models: Different AI models optimized for planning, execution, and review
- 📈 Comprehensive Analytics: Session-based metrics with cost optimization insights
- 🛡️ Production Ready: Robust error handling and connection validation
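The concurrent execution model can be sketched with `asyncio`; this is an illustrative pattern only (the task names, concurrency limit, and the sleep standing in for an LLM call are placeholders, not ReasonLoop's actual internals):

```python
import asyncio

async def run_task(name, semaphore):
    # Limit concurrency so parallel tasks don't exhaust API rate limits
    async with semaphore:
        await asyncio.sleep(0.01)  # stand-in for an LLM API call
        return f"{name}: done"

async def main():
    semaphore = asyncio.Semaphore(3)  # at most 3 calls in flight at once
    tasks = [run_task(f"task-{i}", semaphore) for i in range(5)]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
print(results)
```

`asyncio.gather` preserves submission order, so results line up with the tasks that produced them even though execution is interleaved.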
```bash
# Clone the repository
git clone <repository-url>
cd ReasonLoop

# Install dependencies
pip install -r requirements.txt

# Or use poetry (recommended)
poetry install
```

Create your `.env` file:

```bash
cp .env.example .env
# Edit .env with your settings
```

```env
# LLM Provider
LLM_PROVIDER=xai

# API Configuration
XAI_API_KEY=your-xai-api-key
XAI_MODEL=grok-4-1-fast-non-reasoning

# Role-based models (automatically selected based on task type)
XAI_MODEL_ORCHESTRATOR=grok-4-1-fast-non-reasoning
XAI_MODEL_PLANNER=grok-4-1-fast-non-reasoning
XAI_MODEL_EXECUTOR=grok-4-1-fast-non-reasoning
XAI_MODEL_REVIEWER=grok-4-1-fast-non-reasoning

# Basic settings
LLM_TEMPERATURE=0.7
LLM_MAX_TOKENS=4096
```

OpenAI Configuration

```env
LLM_PROVIDER=openai
OPENAI_API_KEY=your-api-key
OPENAI_MODEL=gpt-4
```

Anthropic Configuration

```env
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=your-api-key
ANTHROPIC_MODEL=claude-3-5-sonnet-latest
```

Ollama Configuration

```env
LLM_PROVIDER=ollama
OLLAMA_API_URL=http://localhost:11434/api/generate
OLLAMA_MODEL=llama3
```

```bash
# List available abilities
python main.py --list-abilities

# Test with a simple objective
python main.py --objective "What is Python?" --verbose
```

```bash
# Simple task execution
python main.py --objective "Explain quantum computing in simple terms"

# Research and analysis
python main.py --objective "Research the latest trends in artificial intelligence and create a comprehensive report"

# Creative content generation
python main.py --objective "Write a marketing strategy for a sustainable fashion startup"

# Technical documentation
python main.py --objective "Create a technical specification for a REST API"
```

```bash
# Use a specific template
python main.py --template default_tasks --objective "Analyze competitor pricing strategies"

# Verbose logging for debugging
python main.py --objective "Create a business plan" --verbose

# Custom model selection
python main.py --objective "Debug this Python code" --model gpt-4-turbo
```

Available Templates
- default_tasks: General-purpose task execution
- marketing_insights: Marketing analysis and strategy
- revenue_optimization: Revenue-focused analysis
- ecommerce_metrics: E-commerce performance analysis
- autonomous_ecommerce_growth: Comprehensive e-commerce growth strategy
ReasonLoop provides comprehensive metrics tracking:
- Token Usage: Actual tokens consumed from LLM APIs
- Cost Tracking: Real USD costs from provider APIs
- Performance: Tokens per second, response times
- Provider Efficiency: Multi-provider comparison
```json
{
  "usage": {
    "prompt_tokens": 407,
    "completion_tokens": 1173,
    "total_tokens": 1580,
    "model": "grok-4-1-fast-non-reasoning",
    "provider": "xai",
    "cost_usd": 0.64375,
    "usage_source": "api"
  }
}
```

- Prompt Logs: `logs/prompts/YYYYMMDD_HHMMSS_ability_taskN_prompt.json`
- Session Metrics: `logs/metrics/session_ID_timestamp.json`
- Application Logs: `logs/reasonloop_timestamp.log`
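A usage record like the one above can be summarized with a few lines of Python. The field names follow the example record, so adjust them if the actual log layout differs:

```python
import json

# Parse a usage record shaped like the example above
record = json.loads("""
{
  "usage": {
    "prompt_tokens": 407,
    "completion_tokens": 1173,
    "total_tokens": 1580,
    "model": "grok-4-1-fast-non-reasoning",
    "provider": "xai",
    "cost_usd": 0.64375,
    "usage_source": "api"
  }
}
""")

usage = record["usage"]
# Cost per 1K tokens is a quick way to compare providers and models
cost_per_1k = usage["cost_usd"] / usage["total_tokens"] * 1000
print(f"{usage['provider']}/{usage['model']}: ${cost_per_1k:.4f} per 1K tokens")
```

The same loop applied over every file in `logs/metrics/` gives a per-session cost breakdown.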
```
ReasonLoop/
├── abilities/               # Individual AI capabilities
│   ├── text_completion.py   # LLM integration with metrics
│   ├── web_search.py        # Web search functionality
│   ├── web_scrape.py        # Content extraction
│   └── ...
├── core/                    # Execution engine
│   ├── execution_loop.py    # Main orchestration logic
│   └── task_manager.py      # Task breakdown and execution
├── config/                  # Configuration management
├── utils/                   # Utilities and helpers
│   ├── metrics.py           # Metrics collection and tracking
│   ├── llm_utils.py         # LLM provider utilities
│   └── prompt_logger.py     # Comprehensive logging
└── templates/               # Prompt templates for different use cases
```
- 🤖 Orchestrator: High-level coordination and planning
- 📋 Planner: Task breakdown and strategy development
- ⚡ Executor: Content generation and implementation
- 🔍 Reviewer: Analysis, validation, and quality assurance
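Role-based selection can be as simple as a lookup against the `XAI_MODEL_*` environment variables from the configuration section. This is an illustrative sketch, not necessarily how ReasonLoop resolves models internally:

```python
import os

DEFAULT_MODEL = "grok-4-1-fast-non-reasoning"

def model_for_role(role):
    # e.g. role "planner" -> env var XAI_MODEL_PLANNER
    return os.environ.get(f"XAI_MODEL_{role.upper()}", DEFAULT_MODEL)

os.environ["XAI_MODEL_PLANNER"] = "grok-4-1-fast-non-reasoning"
print(model_for_role("planner"))
print(model_for_role("unknown-role"))  # falls back to the default
```

Keeping the fallback explicit means a missing role variable degrades gracefully instead of failing at runtime.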
- Create the ability in the `abilities/` directory:

```python
# abilities/my_custom_ability.py
def my_custom_ability(prompt: str) -> str:
    # Your implementation
    return "Custom result"
```

- Register it in `abilities/ability_registry.py`:

```python
register_ability("my-custom", my_custom_ability)
```

- Use it in tasks:

```bash
python main.py --objective "Use my custom ability to process this data"
```

- Add a JSON template to `templates/`:

```json
{
  "name": "my_template",
  "description": "My custom template",
  "system_message": "You are a specialized agent...",
  "task_prompt": "Given this objective: {objective}..."
}
```

- Use it with the `--template` flag:

```bash
python main.py --template my_template --objective "Your objective here"
```

- Market Research: Automated competitive analysis
- Technical Documentation: API documentation generation
- Academic Research: Literature review and synthesis
- Data Analysis: Database queries and insights
- Content Strategy: Blog posts, articles, social media
- Marketing Campaigns: Multi-channel campaign development
- SEO Optimization: Keyword research and content optimization
- Brand Analysis: Competitor benchmarking and positioning
- Performance Analytics: KPI tracking and optimization
- Customer Insights: Segmentation and behavior analysis
- Revenue Optimization: Pricing strategies and upselling
- Process Automation: Workflow optimization and streamlining
- Product Research: Market analysis and competitive intelligence
- Customer Journey: Experience optimization and conversion analysis
- Inventory Management: Demand forecasting and stock optimization
- Campaign Performance: Multi-channel attribution and ROI analysis
API Connection Errors
Problem: ❌ LLM API Test Failed: API key is invalid
Solution:
- Check your `.env` file has the correct API key
- Verify the API key has sufficient credits
- Test with the provider's direct API
High Costs
Problem: Unexpectedly high token usage
Solution:
- Review metrics in the `logs/prompts/` files
- Use shorter, more specific prompts
- Consider switching to more cost-effective models
- Enable caching where possible
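Before switching models, you can sanity-check expected spend from token counts and your provider's published per-token prices. The model names and prices below are placeholders, not real rates:

```python
# Hypothetical per-million-token prices in USD - check your provider's pricing page
PRICES = {
    "cheap-model": {"prompt": 0.50, "completion": 1.50},
    "premium-model": {"prompt": 5.00, "completion": 15.00},
}

def estimate_cost(model, prompt_tokens, completion_tokens):
    # Prices are quoted per million tokens, so scale accordingly
    p = PRICES[model]
    return (prompt_tokens * p["prompt"] + completion_tokens * p["completion"]) / 1_000_000

cheap = estimate_cost("cheap-model", 407, 1173)
premium = estimate_cost("premium-model", 407, 1173)
print(cheap, premium)
```

Comparing the two estimates against the `cost_usd` values in your metrics logs shows quickly whether a cheaper model is worth trying.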
Poor Performance
Problem: Slow execution or timeouts
Solution:
- Check internet connectivity
- Verify API rate limits
- Use the `--verbose` flag for debugging
- Consider using faster models for simpler tasks
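For transient timeouts and rate limits, a small retry-with-backoff wrapper around API calls often helps. This is a generic sketch, not ReasonLoop's built-in behavior:

```python
import random
import time

def call_with_retries(fn, max_attempts=4, base_delay=1.0):
    """Retry fn() with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            # 1s, 2s, 4s, ... plus jitter to avoid synchronized retries
            time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)

# Example: a flaky call that succeeds on the third attempt
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("simulated timeout")
    return "ok"

result = call_with_retries(flaky, base_delay=0.01)
print(result)
```

The jitter term matters when many concurrent tasks hit the same rate limit: without it, they all retry at the same instant and fail together again.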
```bash
# Enable verbose logging
python main.py --objective "Your task" --verbose

# Check specific log files
tail -f logs/reasonloop_$(date +%Y%m%d)*.log
```

- Python: 3.8 or higher
- Dependencies: See `requirements.txt`
- API Access: At least one LLM provider (XAI, OpenAI, Anthropic, or Ollama)
- Internet: Required for web-based abilities and API calls
- Memory: 2GB+ recommended for complex tasks
- API Keys: Stored in environment variables (`.env`)
- Data Handling: No persistent storage of sensitive data
- Logs: Automatic rotation and cleanup
- Privacy: Local processing where possible
MIT License - see LICENSE file for details
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
- Issues: Use GitHub Issues for bugs and feature requests
- Documentation: Check the `docs/` directory for detailed guides
- Examples: See the `examples/` directory for usage patterns
- Web UI: Interactive dashboard for task management
- Streaming: Real-time response streaming
- Plugin System: Third-party ability integration
- Advanced Analytics: ML-powered performance insights
- Team Collaboration: Multi-user workspace support
- API Gateway: RESTful API for external integrations
Built with modern Python async architecture, inspired by autonomous agent frameworks, and designed for production-scale AI task orchestration.
Ready to automate your AI workflows? Start with a simple objective and see ReasonLoop break it down into actionable tasks! 🚀