An AI agent that plays text-based adventure games autonomously. Currently supports Colossal Cave Adventure with a modular architecture designed for easy extension to other games and AI models.
The repository includes example configurations and run logs demonstrating the agent's behavior with very small models. Notably, smaller models can be observed getting stuck in loops, particularly around challenging areas like the slit in Colossal Cave Adventure.
- 🎮 Game-Agnostic Architecture: Modular design supports multiple text-based games
- 🤖 Multi-Model Support: Works with different AI providers (OpenAI, Anthropic, Google)
- 🛠️ Tool-Based AI Interaction: Uses OpenAI's function calling for structured game interactions
- 💾 Session Persistence: Save and resume game sessions automatically
- 🧠 Context Management: Conversation summarization to handle long gameplay sessions
- 📊 Comprehensive Logging: JSON Lines format for game interactions and structured logging
- ⚙️ Flexible Configuration: YAML-based configuration with environment variable support
- 🧪 Comprehensive Testing: Full test suite covering all components
- 🧠 Structured Logging: session-management (stop/resume/save/load) messages are logged under `session_management`, prompts under `agent_prompt`, and game output under `game_output`. Initial game output is not repeated on resume.
- 🧠 Human-Readable Logs: JSON Lines logs that are easy to read and parse.
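Because each structured log line carries its category, the categories above can be filtered mechanically. A minimal sketch (the exact field names, such as `category` and `message`, are assumptions; check the actual logs for the real schema):

```python
import json

# Filter structured JSON Lines log entries by category.
# Field names ("category", "message") are assumed for illustration.
log_lines = [
    '{"category": "session_management", "message": "session resumed"}',
    '{"category": "agent_prompt", "message": "You are an adventurer..."}',
    '{"category": "game_output", "message": "You are in a forest."}',
]
entries = [json.loads(line) for line in log_lines]
game_output = [e["message"] for e in entries if e["category"] == "game_output"]
print(game_output)  # ['You are in a forest.']
```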
- Python 3.8+
- Conda environment with required dependencies
- API key (set as environment variable)
- Clone the repository:

```bash
git clone <repository-url>
cd Nomad
```

- Activate your conda environment:

```bash
conda activate nomad
```

- Set the API key for your chosen provider:

```bash
export OPENAI_API_KEY="your-api-key-here"
export GEMINI_API_KEY="your-api-key-here"
export ANTHROPIC_API_KEY="your-api-key-here"
```

- Edit the configuration as needed (you can adapt the examples):

```yaml
model: "openai:gpt-4"
max_steps: 1000
...
```

- Run the agent:

```bash
python agent.py my_config.yaml session/directory
```

The agent will start playing the game and save its progress in the specified session directory.
- Games Layer: Standardized interface for text-based games with session management
- Models Layer: Abstraction over different AI providers with tool support
- Agents Layer: Game-playing logic with tool-based interactions and context management
- Utils Layer: Configuration, logging, and error handling
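The interplay between the layers can be sketched with minimal stand-in classes. The class and method names below are illustrative only, not Nomad's actual interfaces:

```python
# Minimal sketch of the layered architecture; all names here are
# illustrative stand-ins, not the real Nomad classes.

class StubGame:
    """Games layer: standardized interface over a text adventure."""
    def step(self, command):
        return "You typed: " + command

class StubModel:
    """Models layer: abstraction over an AI provider."""
    def choose_action(self, observation):
        # A real model would call a provider API with tool declarations.
        return "look"

class StubAgent:
    """Agents layer: drives the game using the model's decisions."""
    def __init__(self, game, model):
        self.game = game
        self.model = model
        self.transcript = []  # list of (action, observation) pairs

    def play(self, steps):
        observation = "You are standing at the end of a road."
        for _ in range(steps):
            action = self.model.choose_action(observation)
            observation = self.game.step(action)
            self.transcript.append((action, observation))

agent = StubAgent(StubGame(), StubModel())
agent.play(steps=2)
print(len(agent.transcript))  # 2
```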
For detailed architecture documentation, see AGENTS.md.
The agent logs all game interactions in JSON Lines format for analysis:
```json
{"step": 1, "action": "look", "response": "You are standing at the end of a road...", "score": 0, "timestamp": "2024-01-01T12:00:00"}
{"step": 2, "action": "go north", "response": "You are in a forest.", "score": 0, "timestamp": "2024-01-01T12:00:05"}
```

To run the test suite:

```bash
conda activate nomad
PYTHONPATH=. python -m pytest -s tests/
```

Adding `--api` runs the tests that actually call the provider APIs (this might incur costs!).
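Because each log entry is standalone JSON, the gameplay log is easy to analyze with a few lines of Python. This sketch uses the two example entries from above inline; in practice you would read them from the session's log file:

```python
import json

# Parse JSON Lines gameplay entries and summarize actions and score.
# These two lines mirror the example log format shown above.
lines = [
    '{"step": 1, "action": "look", "response": "You are standing at the end of a road...", "score": 0, "timestamp": "2024-01-01T12:00:00"}',
    '{"step": 2, "action": "go north", "response": "You are in a forest.", "score": 0, "timestamp": "2024-01-01T12:00:05"}',
]
entries = [json.loads(line) for line in lines]
actions = [e["action"] for e in entries]
final_score = entries[-1]["score"]
print(actions)      # ['look', 'go north']
print(final_score)  # 0
```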
```
Nomad/
├── AGENTS.md             # Overall system architecture for LLM agents
├── README.md             # This file
├── agent.py              # Main script to run the agent
├── config_schema.json    # JSON schema for configuration validation
├── initial_config.json   # Example agent configuration
├── requirements.txt      # Python dependencies
├── run_tests.sh          # Script to execute tests
├── nomad/                # Core library code
│   ├── __init__.py
│   ├── agents/           # Agent logic
│   │   ├── __init__.py
│   │   ├── AGENTS.md     # Technical details of the agents package
│   │   ├── base.py       # BaseAgent class and tool decorators
│   │   └── colossal_cave.py  # ColossalCaveAgent implementation
│   ├── games/            # Game interaction layer
│   │   ├── __init__.py
│   │   ├── AGENTS.md     # Technical details of the games package
│   │   ├── base.py       # BaseGame interface
│   │   └── colossal_cave.py  # ColossalCaveGame implementation
│   ├── models/           # LLM abstraction layer
│   │   ├── __init__.py
│   │   ├── AGENTS.md     # Technical details of the models package
│   │   ├── base.py       # BaseModel interface, ToolDeclaration, ModelResponse
│   │   └── openai.py     # OpenAIModel (multi-provider) implementation
│   └── utils/            # Utility functions and classes
│       ├── __init__.py
│       └── exceptions.py # Custom exception classes
└── tests/                # Automated tests
    ├── __init__.py
    ├── AGENTS.md         # Technical details of the testing setup
    ├── conftest.py       # Pytest configuration and hooks
    ├── agents/
    │   ├── test_base_agent.py
    │   └── test_colossal_cave_agent.py
    ├── games/
    │   └── test_colossal_cave.py
    └── models/
        └── test_openai.py
```
- Implement the `BaseGame` interface in `games/`
- Create a specialized agent in `agents/`
- Add game selection logic to `agent.py`
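The steps above can be sketched as follows. The method names on `BaseGame` are assumptions inferred from the package layout, not the real interface in `nomad/games/base.py`; the example game is purely illustrative:

```python
# Hypothetical sketch of adding a new game; the real BaseGame
# interface in nomad/games/base.py may use different method names.

class BaseGame:  # stand-in for nomad.games.base.BaseGame
    def start(self):
        raise NotImplementedError
    def send_command(self, command):
        raise NotImplementedError

class ZorkGame(BaseGame):
    """Example new game wired into the games layer."""
    def __init__(self):
        self.started = False
    def start(self):
        self.started = True
        return "West of House. You are standing in an open field."
    def send_command(self, command):
        # A real implementation would forward the command to the game engine.
        return "(echo) " + command

game = ZorkGame()
print(game.start())
```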
- Implement the `BaseModel` interface in `models/`
- Add model creation logic to `agent.py`
- Update the configuration documentation
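A new provider follows the same pattern. Again, the method names are illustrative stand-ins rather than the real `BaseModel` interface, and the factory function below is hypothetical; it simply mirrors the `"provider:model"` convention seen in the configuration example (`"openai:gpt-4"`):

```python
# Hypothetical sketch of adding a new model provider; the real
# BaseModel interface in nomad/models/base.py may differ.

class BaseModel:  # stand-in for nomad.models.base.BaseModel
    def generate(self, prompt):
        raise NotImplementedError

class EchoModel(BaseModel):
    """A trivial offline 'provider', useful for testing the agent loop."""
    def generate(self, prompt):
        # A real implementation would call the provider's API here.
        return "look"

def create_model(spec):
    # Mirrors the "provider:model" convention, e.g. "openai:gpt-4".
    provider, _, name = spec.partition(":")
    if provider == "echo":
        return EchoModel()
    raise ValueError("Unknown provider: " + provider)

model = create_model("echo:test")
print(model.generate("What next?"))  # look
```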
- Lint with flake8
- Format with Black
- Always update README.md and AGENTS.md after every change
- Write tests first (TDD approach)
- Use type hints for better code clarity
- Fork the repository
- Create a feature branch
- Write tests for new functionality
- Ensure all tests pass
- Submit a pull request
- Follow the existing architecture patterns
- Write tests first (TDD approach)
- Use type hints for better code clarity
- Comment extensively
- Document new features in AGENTS.md and update README.md
MIT
- Built on the adventure package for Colossal Cave Adventure
- Uses OpenAI's function calling for structured AI interactions