ATCOT is a dynamic iterative reasoning framework that enables Large Language Models (LLMs) to revise their reasoning trajectories based on tool feedback. Unlike traditional forward-only approaches such as ReAct, ATCOT provides adaptive correction mechanisms that allow models to revisit and revise earlier steps when tools return contradictory or clarifying information.
- Adaptive Planning: Dynamic plan generation with confidence scoring and dependency tracking
- Tool-Augmented Execution: Sophisticated tool selection and parallel candidate generation
- Correction Mechanism: Backward traversal for minimal revision set identification
- State Tracking: Comprehensive state representation with planning, reasoning, tool history, and corrections
- Convergence Guarantees: Bounded corrections with monotonic improvement tracking
- Flexible Tool System: Pluggable tool architecture with built-in tools for math, search, and code execution
The ATCOT framework implements a comprehensive state representation S = {P, R, H, C} where:
- P: Planning structure with ordered sequence of steps and explicit dependencies
- R: Reasoning trace comprising intermediate conclusions and justifications
- H: Tool invocation history with temporal annotations and results
- C: Correction log tracking all revisions and triggering conditions
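As a concrete illustration, the four components can be pictured as a simple container. This is a minimal sketch for exposition only — the field and class names below are assumptions, not the framework's actual data structures:

```python
# Illustrative sketch of the S = {P, R, H, C} state; names are assumptions
# for exposition, not ATCOT's actual classes.
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class StateSketch:
    plan: List[Dict[str, Any]] = field(default_factory=list)          # P: ordered steps with dependencies
    reasoning: List[str] = field(default_factory=list)                # R: intermediate conclusions
    tool_history: List[Dict[str, Any]] = field(default_factory=list)  # H: annotated tool invocations
    corrections: List[Dict[str, Any]] = field(default_factory=list)   # C: revisions and their triggers

state = StateSketch()
state.plan.append({"id": "s1", "goal": "parse the query", "deps": []})
state.tool_history.append({"step": "s1", "tool": "calculator", "result": "42"})
```

The key property is that all four components live in one mutable state, so a correction can rewrite `plan` and `reasoning` while `tool_history` and `corrections` keep an append-only record of what happened.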
- Clone the repository:

```bash
git clone https://github.com/ChihayaAine/ATCOT.git
cd ATCOT
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Set up API keys (optional):

```bash
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
```

```python
import asyncio

from atcot import ATCOTFramework
from atcot.utils.config import load_config

# Load configuration
config = load_config()

# Set up the framework
framework = ATCOTFramework(config)

# Run a query
async def main():
    result = await framework.execute("Calculate the compound interest on $1000 at 5% for 3 years")
    print(f"Answer: {result.final_answer}")
    print(f"Corrections: {result.total_corrections}")

asyncio.run(main())
```
```bash
# Run a single query
python main.py --query "What is the population of Tokyo in 2023?"

# Interactive mode
python main.py --interactive

# Debug mode
python main.py --query "Calculate 15% of 250" --debug

# Use a custom configuration
python main.py --config config.json --query "Your question here"
```

Create a config.json file to customize the framework:
```json
{
  "llm": {
    "provider": "openai",
    "model_name": "gpt-4",
    "temperature": 0.7
  },
  "correction": {
    "max_corrections": 5,
    "contradiction_threshold": 0.7
  },
  "tools": {
    "calculator": {"enabled": true},
    "web_search": {"enabled": true, "config_params": {"search_engine": "duckduckgo"}},
    "python_interpreter": {"enabled": true},
    "wikipedia": {"enabled": true, "config_params": {"language": "en"}}
  }
}
```

- Calculator: Mathematical computations and expression evaluation
- Web Search: Real-time information retrieval from the web
- Python Interpreter: Code execution for complex calculations and data processing
- Wikipedia: Factual information retrieval from Wikipedia
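To make the Calculator tool concrete, here is a minimal, self-contained sketch of the kind of work it performs — safe arithmetic evaluation over a whitelisted AST. This is an illustration of the idea, not the built-in tool's actual implementation:

```python
# Sketch of a calculator-style tool: evaluate arithmetic expressions by
# walking the AST and allowing only whitelisted operators.
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow, ast.USub: operator.neg}

def calculate(expression: str) -> float:
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval").body)

# Compound interest on $1000 at 5% for 3 years: 1000 * 1.05^3 ≈ 1157.63
print(calculate("1000 * (1 + 0.05) ** 3"))
```

Walking the AST instead of calling `eval` keeps the tool safe when expressions come from model output rather than trusted code.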
- ATCOTState: Main state container with P, R, H, C components
- PlanningStructure: DAG-based plan representation with dependencies
- ReasoningTrace: Sequential reasoning steps with justifications
- ToolHistory: Complete tool invocation history
- CorrectionLog: Revision tracking and analysis

- AdaptivePlanner: Handles initial planning and dynamic replanning
- LLMPlanGenerator: LLM-based plan generation with MAP estimation
- Confidence scoring and dependency validation

- ToolAugmentedExecutor: Main execution coordinator
- LLMToolSelector: Intelligent tool selection using learned policies
- Candidate observation generation and reliability scoring

- AdaptiveCorrectionMechanism: Main correction coordinator
- BackwardTraversalRevision: Minimal revision set identification
- Convergence tracking and loop prevention
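The core idea behind backward-traversal revision can be sketched in a few lines: given the step whose conclusion a tool contradicted, the minimal revision set is that step plus everything that transitively depends on it, while independent steps are left untouched. The function below is an illustrative sketch, not the `BackwardTraversalRevision` implementation:

```python
# Sketch of minimal-revision-set identification via backward traversal.
# `deps` maps each step to the steps it depends on; names are illustrative.
from typing import Dict, List, Set

def minimal_revision_set(deps: Dict[str, List[str]], contradicted: str) -> Set[str]:
    # Invert the dependency map: which steps consume each step's output?
    dependents: Dict[str, Set[str]] = {s: set() for s in deps}
    for step, parents in deps.items():
        for p in parents:
            dependents.setdefault(p, set()).add(step)
    # Collect the contradicted step and all of its transitive dependents.
    to_revise, frontier = {contradicted}, [contradicted]
    while frontier:
        for child in dependents.get(frontier.pop(), ()):
            if child not in to_revise:
                to_revise.add(child)
                frontier.append(child)
    return to_revise

deps = {"s1": [], "s2": ["s1"], "s3": ["s1"], "s4": ["s2", "s3"]}
print(sorted(minimal_revision_set(deps, "s2")))  # ['s2', 's4']; s1 and s3 survive
```

Revising only this set is what makes corrections cheap: work on branches unaffected by the contradiction is preserved.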
```python
from typing import Any, Dict

from atcot.tools.base import BaseTool, ToolResult

class CustomTool(BaseTool):
    def __init__(self):
        super().__init__(
            name="custom_tool",
            description="Your custom tool description",
            capabilities=["custom_capability"]
        )

    async def execute(self, args: Dict[str, Any]) -> ToolResult:
        # Your tool implementation
        return ToolResult(content="result", success=True)

    def get_schema(self) -> Dict[str, Any]:
        return {
            "type": "object",
            "properties": {"param": {"type": "string"}},
            "required": ["param"]
        }

# Register with the framework
tool_registry.register_tool(CustomTool())
```

```python
from atcot.utils.llm_interface import LLMInterface

class CustomLLMInterface(LLMInterface):
    async def generate_async(self, prompt: str, **kwargs) -> str:
        # Your LLM implementation
        return "Generated response"
```

ATCOT implements the algorithm described in our paper:
- Initialize State: Create the comprehensive state representation
- Generate Plan: Decompose the query into a structured plan with dependencies
- Execute Steps: For each ready step:
  - Generate candidate observations through tool execution
  - Select the best observation using reliability scoring
  - Check for local contradictions
  - Perform a correction if needed
- Global Consistency: Verify global consistency and replan if necessary
- Convergence: Check for convergence through bounded corrections
- Generate Answer: Synthesize the final answer from the reasoning trace
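The loop above can be compressed into a runnable sketch with stubbed tools and a trivial reliability score. All names here are illustrative, not the framework's API; the point is the shape of the control flow, including the bounded-corrections guard that guarantees termination:

```python
# Compressed sketch of the ATCOT execution loop with stubbed tools.
MAX_CORRECTIONS = 5  # bounded corrections guarantee termination

def run(plan, tools, contradicts):
    state = {"results": {}, "corrections": 0}
    i = 0
    while i < len(plan):
        step = plan[i]
        # Generate candidate observations; keep the most reliable one.
        candidates = [(tool(step["input"]), score) for tool, score in tools]
        obs, _ = max(candidates, key=lambda c: c[1])
        # On a detected contradiction, revise (a bounded number of times).
        if contradicts(state["results"], step["id"], obs) and state["corrections"] < MAX_CORRECTIONS:
            state["corrections"] += 1
            state["results"].pop(step["id"], None)
            continue  # re-execute this step after the revision
        state["results"][step["id"]] = obs
        i += 1
    return state

plan = [{"id": "a", "input": 2}, {"id": "b", "input": 3}]
tools = [(lambda x: x * 2, 0.9), (lambda x: x + 100, 0.1)]  # (tool, reliability)
final = run(plan, tools, lambda results, sid, obs: False)
print(final["results"], final["corrections"])  # {'a': 4, 'b': 6} 0
```

Swapping in a contradiction checker that fires once would drive `corrections` to 1 while still converging, mirroring the bounded-revision behavior described above.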
ATCOT demonstrates consistent improvements over baseline methods:
- GSM8K: 1.3% average improvement over ReAct across model scales
- HotpotQA: 7.3% average improvement over ReAct
- Correction Efficiency: 92% of successful corrections occur within the first two attempts
- Convergence: Bounded corrections prevent infinite loops while maintaining high success rates
This project is licensed under the MIT License - see the LICENSE file for details.