ReasonLoop 🤖

A modular AI agent system with comprehensive metrics tracking, multi-provider LLM support, and intelligent task orchestration.

Python 3.8+ License: MIT

✨ Features

  • 🤖 Multi-Agent Orchestration: Intelligent task breakdown and execution
  • 📊 Real-Time Metrics: Actual token usage, costs, and performance tracking
  • 🔄 Multi-Provider Support: XAI (Grok), OpenAI, Anthropic, Ollama, Z.ai
  • ⚡ Async Architecture: Concurrent task execution with proper resource management
  • 🎯 Role-Based Models: Different AI models optimized for planning, execution, and review
  • 📈 Comprehensive Analytics: Session-based metrics with cost optimization insights
  • 🛡️ Production Ready: Robust error handling and connection validation

🚀 Quick Start

1. Installation

# Clone the repository
git clone <repository-url>
cd ReasonLoop

# Install dependencies
pip install -r requirements.txt

# Or use poetry (recommended)
poetry install

2. Configuration

Create your .env file:

cp .env.example .env
# Edit .env with your settings

XAI Configuration (Recommended)

# LLM Provider
LLM_PROVIDER=xai

# API Configuration
XAI_API_KEY=your-xai-api-key
XAI_MODEL=grok-4-1-fast-non-reasoning

# Role-based models (automatically selected based on task type)
XAI_MODEL_ORCHESTRATOR=grok-4-1-fast-non-reasoning
XAI_MODEL_PLANNER=grok-4-1-fast-non-reasoning
XAI_MODEL_EXECUTOR=grok-4-1-fast-non-reasoning
XAI_MODEL_REVIEWER=grok-4-1-fast-non-reasoning

# Basic settings
LLM_TEMPERATURE=0.7
LLM_MAX_TOKENS=4096
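The role-based variables above fall back to the provider default when a role-specific model is not set. As a rough illustration of that fallback (a hypothetical helper, not ReasonLoop's actual config module), the selection logic might look like:

```python
import os

def model_for_role(role: str) -> str:
    """Pick the model for an agent role, falling back to XAI_MODEL."""
    # e.g. role="planner" checks XAI_MODEL_PLANNER first
    return os.environ.get(f"XAI_MODEL_{role.upper()}") or os.environ.get("XAI_MODEL", "")
```

With the configuration above, every role resolves to the same model; set a role-specific variable to override just that role.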

Other Providers

OpenAI Configuration

LLM_PROVIDER=openai
OPENAI_API_KEY=your-api-key
OPENAI_MODEL=gpt-4

Anthropic Configuration

LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=your-api-key
ANTHROPIC_MODEL=claude-3-5-sonnet-latest

Ollama Configuration

LLM_PROVIDER=ollama
OLLAMA_API_URL=http://localhost:11434/api/generate
OLLAMA_MODEL=llama3

3. Test Your Setup

# List available abilities
python main.py --list-abilities

# Test with simple objective
python main.py --objective "What is Python?" --verbose

🎯 Usage

Basic Examples

# Simple task execution
python main.py --objective "Explain quantum computing in simple terms"

# Research and analysis
python main.py --objective "Research the latest trends in artificial intelligence and create a comprehensive report"

# Creative content generation
python main.py --objective "Write a marketing strategy for a sustainable fashion startup"

# Technical documentation
python main.py --objective "Create a technical specification for a REST API"

Advanced Usage

# Use specific template
python main.py --template default_tasks --objective "Analyze competitor pricing strategies"

# Verbose logging for debugging
python main.py --objective "Create a business plan" --verbose

# Custom model selection
python main.py --objective "Debug this Python code" --model gpt-4-turbo

Template Examples

Available Templates
  • default_tasks: General-purpose task execution
  • marketing_insights: Marketing analysis and strategy
  • revenue_optimization: Revenue-focused analysis
  • ecommerce_metrics: E-commerce performance analysis
  • autonomous_ecommerce_growth: Comprehensive e-commerce growth strategy

📊 Metrics & Analytics

ReasonLoop provides comprehensive metrics tracking:

Real-Time Metrics

  • Token Usage: Actual tokens consumed from LLM APIs
  • Cost Tracking: Real USD costs from provider APIs
  • Performance: Tokens per second, response times
  • Provider Efficiency: Multi-provider comparison

Session Analytics

{
  "usage": {
    "prompt_tokens": 407,
    "completion_tokens": 1173,
    "total_tokens": 1580,
    "model": "grok-4-1-fast-non-reasoning",
    "provider": "xai",
    "cost_usd": 0.64375,
    "usage_source": "api"
  }
}
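The example record can be summarized directly; the snippet below (a sketch using the field names from the JSON above, not an API from the ReasonLoop codebase) normalizes cost per 1,000 tokens:

```python
# Sketch: summarizing a session-metrics record with the field names
# from the example above.
def cost_per_1k_tokens(usage: dict) -> float:
    """USD cost normalized per 1,000 total tokens."""
    return usage["cost_usd"] / usage["total_tokens"] * 1000

usage = {
    "prompt_tokens": 407,
    "completion_tokens": 1173,
    "total_tokens": 1580,   # = prompt_tokens + completion_tokens
    "cost_usd": 0.64375,
}
```

Normalizing this way makes cost comparisons across sessions and providers meaningful regardless of response length.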

Metrics Files

  • Prompt Logs: logs/prompts/YYYYMMDD_HHMMSS_ability_taskN_prompt.json
  • Session Metrics: logs/metrics/session_ID_timestamp.json
  • Application Logs: logs/reasonloop_timestamp.log
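To inspect the most recent session after a run, you can locate the newest metrics file under the layout described above. This is a hedged sketch assuming the stated logs/metrics/ layout; the helper itself is not part of ReasonLoop:

```python
from pathlib import Path
from typing import Optional

def latest_session_metrics(log_root: str = "logs") -> Optional[Path]:
    """Return the most recently written session-metrics file, if any."""
    files = list(Path(log_root, "metrics").glob("session_*.json"))
    # Sort by modification time, since filenames start with the session ID
    return max(files, key=lambda p: p.stat().st_mtime) if files else None
```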

πŸ—οΈ Architecture

Core Components

ReasonLoop/
├── abilities/              # Individual AI capabilities
│   ├── text_completion.py  # LLM integration with metrics
│   ├── web_search.py       # Web search functionality
│   ├── web_scrape.py       # Content extraction
│   └── ...
├── core/                   # Execution engine
│   ├── execution_loop.py   # Main orchestration logic
│   └── task_manager.py     # Task breakdown and execution
├── config/                 # Configuration management
├── utils/                  # Utilities and helpers
│   ├── metrics.py          # Metrics collection and tracking
│   ├── llm_utils.py        # LLM provider utilities
│   └── prompt_logger.py    # Comprehensive logging
└── templates/              # Prompt templates for different use cases

Agent Roles & Model Selection

  • 🤖 Orchestrator: High-level coordination and planning
  • 📋 Planner: Task breakdown and strategy development
  • ⚡ Executor: Content generation and implementation
  • 🔍 Reviewer: Analysis, validation, and quality assurance

🔧 Development

Adding New Abilities

  1. Create the ability in the abilities/ directory:

# abilities/my_custom_ability.py
def my_custom_ability(prompt: str) -> str:
    # Your implementation
    return "Custom result"

  2. Register it in abilities/ability_registry.py:

register_ability("my-custom", my_custom_ability)

  3. Use it in tasks:

python main.py --objective "Use my custom ability to process this data"
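To experiment with the register/lookup shape outside the repo, a minimal stand-in registry (a sketch only; ReasonLoop's actual ability_registry may differ) could look like:

```python
from typing import Callable, Dict

# Name -> ability function; abilities take a prompt and return a string
_ABILITIES: Dict[str, Callable[[str], str]] = {}

def register_ability(name: str, fn: Callable[[str], str]) -> None:
    _ABILITIES[name] = fn

def run_ability(name: str, prompt: str) -> str:
    if name not in _ABILITIES:
        raise KeyError(f"unknown ability: {name}")
    return _ABILITIES[name](prompt)

def my_custom_ability(prompt: str) -> str:
    return f"Custom result for: {prompt}"

register_ability("my-custom", my_custom_ability)
```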

Creating Templates

  1. Add a JSON template to templates/:

{
  "name": "my_template",
  "description": "My custom template",
  "system_message": "You are a specialized agent...",
  "task_prompt": "Given this objective: {objective}..."
}

  2. Use it with the --template flag:

python main.py --template my_template --objective "Your objective here"
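The {objective} placeholder in task_prompt is filled with your --objective string. A small sketch of validating and rendering a template dict (hypothetical helper names; the required keys are taken from the example JSON above):

```python
# Required keys follow the example template JSON above.
REQUIRED_KEYS = {"name", "description", "system_message", "task_prompt"}

def render_template(template: dict, objective: str) -> str:
    """Validate a template dict and fill its {objective} placeholder."""
    missing = REQUIRED_KEYS - template.keys()
    if missing:
        raise ValueError(f"template missing keys: {sorted(missing)}")
    return template["task_prompt"].format(objective=objective)
```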

📈 Use Cases

Research & Analysis

  • Market Research: Automated competitive analysis
  • Technical Documentation: API documentation generation
  • Academic Research: Literature review and synthesis
  • Data Analysis: Database queries and insights

Content & Marketing

  • Content Strategy: Blog posts, articles, social media
  • Marketing Campaigns: Multi-channel campaign development
  • SEO Optimization: Keyword research and content optimization
  • Brand Analysis: Competitor benchmarking and positioning

Business Intelligence

  • Performance Analytics: KPI tracking and optimization
  • Customer Insights: Segmentation and behavior analysis
  • Revenue Optimization: Pricing strategies and upselling
  • Process Automation: Workflow optimization and streamlining

E-commerce

  • Product Research: Market analysis and competitive intelligence
  • Customer Journey: Experience optimization and conversion analysis
  • Inventory Management: Demand forecasting and stock optimization
  • Campaign Performance: Multi-channel attribution and ROI analysis

πŸ› οΈ Troubleshooting

Common Issues

API Connection Errors

Problem: ✗ LLM API Test Failed: API key is invalid

Solution:

  1. Check your .env file has the correct API key
  2. Verify the API key has sufficient credits
  3. Test the key directly against the provider's API

High Costs

Problem: Unexpectedly high token usage

Solution:

  1. Review metrics in logs/prompts/ files
  2. Use shorter, more specific prompts
  3. Consider switching to more cost-effective models
  4. Enable caching where possible

Poor Performance

Problem: Slow execution or timeouts

Solution:

  1. Check internet connectivity
  2. Verify API rate limits
  3. Use --verbose flag for debugging
  4. Consider using faster models for simpler tasks

Debug Mode

# Enable verbose logging
python main.py --objective "Your task" --verbose

# Check specific log files
tail -f logs/reasonloop_$(date +%Y%m%d)*.log

📋 Requirements

  • Python: 3.8 or higher
  • Dependencies: See requirements.txt
  • API Access: At least one LLM provider (XAI, OpenAI, Anthropic, Ollama, or Z.ai)
  • Internet: Required for web-based abilities and API calls
  • Memory: 2GB+ recommended for complex tasks

🔒 Security & Privacy

  • API Keys: Stored in environment variables (.env)
  • Data Handling: No persistent storage of sensitive data
  • Logs: Automatic rotation and cleanup
  • Privacy: Local processing where possible

📄 License

MIT License - see LICENSE file for details

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

📞 Support

  • Issues: Use GitHub Issues for bugs and feature requests
  • Documentation: Check docs/ directory for detailed guides
  • Examples: See examples/ directory for usage patterns

πŸ—ΊοΈ Roadmap

  • Web UI: Interactive dashboard for task management
  • Streaming: Real-time response streaming
  • Plugin System: Third-party ability integration
  • Advanced Analytics: ML-powered performance insights
  • Team Collaboration: Multi-user workspace support
  • API Gateway: RESTful API for external integrations

πŸ™ Acknowledgments

Built with modern Python async architecture, inspired by autonomous agent frameworks, and designed for production-scale AI task orchestration.


Ready to automate your AI workflows? Start with a simple objective and see ReasonLoop break it down into actionable tasks! 🚀
