Practical examples for using ab, organized by use case.
- **Basic Usage** - `01_basic_usage.py` - Get started in 60 seconds
- **Project Management** - `02_project_management.py` - Track decisions and progress
- **LLM Memory** - `03_llm_memory.py` - Enhance LLM conversations
- **Image Database** - `04_image_database.py` - Manage image collections
- **Graph Connections** - `05_graph_connections.py` - RFS memory web
- **Database Testing** - `06_database_testing.py` - Standard test databases
- **Cursor Extension** - `07_cursor_extension.py` - Stable working memory for dev agents
- **Color Identification** - `08_color_identification.py` - Semantic memory for colors
- **Pokémon Environment** - `09_pokemon_environment.py` - Long-term episodic memory for game AI
- **LLM Memory Effectiveness Testing** - `10_llm_memory_effectiveness_testing.py` - Objective metrics for memory effectiveness
```bash
# Run any example
python examples/01_basic_usage.py
python examples/02_project_management.py
# etc.
```

Core features: storing memories, recalling context, emotional states, Council of Selves.
Project management: architecture decisions, feature planning, progress tracking.
LLM enhancement: persistent memory, user preferences, emotional context.
Image database: metadata storage, similarity search, multi-modal memory.
Graph features: card connections, auto-linking, graph traversal.
Database testing: standard datasets (Chinook), relational structures.
## Cursor Extension - Stable Working Memory for Development Agents
Demonstrates how ab1.1 provides stable working memory for Cursor's AI dev agent:
- Awareness cards per meaningful interaction (user messages, file changes, tool runs)
- Multiple selves (planner, architect, executor) with specialized memory streams
- Session continuity via recall
- Task stack tracking across moments
Use Case: Make Cursor's AI dev agent feel like it has stable working memory across a session, remembering what you're doing, where files live, and ongoing tasks.
Key Features:
- `store_awareness()` for moment creation
- `run_council()` for multi-agent suggestions
- `store_card()` with `owner_self` for self memory streams
- `recall_shape("awareness_last_N")` for session continuity
- Card connections for linking related tasks
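The rolling-window recall pattern behind session continuity can be sketched in plain Python. This is an illustrative stand-in, not ab's actual API: the `AwarenessCard` fields, `WorkingMemory` class, and method names are assumptions modeled on the `store_awareness()` / `recall_shape("awareness_last_N")` calls named above.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class AwarenessCard:
    kind: str        # e.g. "user_message", "file_change", "tool_run"
    summary: str

class WorkingMemory:
    """Keeps a rolling window of awareness cards for session continuity."""

    def __init__(self, window: int = 50):
        # deque(maxlen=...) silently drops the oldest card when full
        self.cards: deque = deque(maxlen=window)

    def store_awareness(self, kind: str, summary: str) -> None:
        self.cards.append(AwarenessCard(kind, summary))

    def recall_last(self, n: int) -> list:
        # Mirrors the idea of recall_shape("awareness_last_N"): newest first
        return list(self.cards)[-n:][::-1]

mem = WorkingMemory()
mem.store_awareness("user_message", "Refactor the auth module")
mem.store_awareness("file_change", "Edited src/auth/session.py")
mem.store_awareness("tool_run", "pytest tests/test_auth.py passed")

for card in mem.recall_last(2):
    print(card.kind, "-", card.summary)
```

The bounded deque is the design point: working memory stays small and recent by construction, so recall never has to scan a full session history.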
## Color Identification - Semantic Memory for Color Experiences
Demonstrates how ab1.1 provides semantic memory for color experiences:
- Color observation cards with RGB/HSV/hex data
- Specialized selves (color_namer, color_stylist) subscribing to color buffers
- Similarity-based recall for color matching
- Context-aware color naming (app, screen, user feedback)
Use Case: Build a semantic memory system for color experiences, learning canonical names and contexts over time.
Key Features:
- Buffer channels (visual channel for color data)
- Custom selves with buffer subscriptions
- Similarity-based recall and auto-linking
- Context buffers for app/screen/user feedback
- Graph traversal to find color clusters
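Similarity-based recall for color cards can be illustrated without ab itself. A minimal sketch, assuming stored cards are dicts with an RGB tuple and using Euclidean RGB distance as the similarity measure (the real example goes through ab's buffer channels and recall, which may use a different metric):

```python
import colorsys
import math

# Stand-in for color observation cards stored in a visual buffer channel
color_cards = [
    {"name": "tomato",     "rgb": (255, 99, 71)},
    {"name": "steel blue", "rgb": (70, 130, 180)},
    {"name": "gold",       "rgb": (255, 215, 0)},
]

def recall_similar(rgb, cards, k=1):
    """Return the k stored cards closest to the query color."""
    return sorted(cards, key=lambda c: math.dist(rgb, c["rgb"]))[:k]

query = (250, 100, 80)                       # a reddish observation
match = recall_similar(query, color_cards)[0]
print(match["name"])                         # closest stored card: tomato

# HSV makes useful extra card metadata: hue groups cluster well in graphs
h, s, v = colorsys.rgb_to_hsv(*(c / 255 for c in query))
```

Swapping the distance function (e.g. distance in HSV space, weighting hue heavily) changes which cards cluster together without touching the recall logic.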
## Pokémon-Style Environment - Long-Term Episodic Memory for Game AI
Demonstrates how ab1.1 provides long-term episodic memory for game AI:
- Awareness card per game tick with full state
- Specialized selves (pathfinder, socializer, strategist) with persistent memory
- Long-term recall of area knowledge
- Forgetting mechanism (rarely-visited areas decay)
Use Case: Create a game AI with persistent, evolving memory that remembers areas, NPCs, and strategies across sessions.
Key Features:
- Moment-per-tick timeline
- Self memory streams (persistent knowledge per self)
- Area-based recall filtering
- Card_stats decay (forgetting mechanism)
- Graph connections for spatial relationships
## LLM Memory Effectiveness Testing - Objective Metrics Framework
Comprehensive testing framework for evaluating long-term memory effectiveness on LLM-based agents:
- Structured test moments with inputs, expected outputs, and evaluation criteria
- Proper handling of ab system outputs (memory bundles, master_input, excess_inputs)
- Objective metrics: accuracy, relevance, completeness, consistency
- Integration with LLM APIs (OpenAI, Anthropic, mock)
- Master input structure following ab's moment system
Use Case: Test and measure how memory affects LLM behavior with objective, reproducible metrics.
Key Features:
- `TestMoment`: structured test scenarios with expected outcomes
- `MemoryContextFormatter`: formats ab memory bundles for LLM context
- `ObjectiveMetrics`: calculates accuracy, relevance, completeness, consistency
- `LLMMemoryTester`: orchestrates test execution with memory vs baseline comparison
- Master input/excess inputs handling for moment-to-moment continuity
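Two of the metrics can be sketched in a few lines, in the spirit of `ObjectiveMetrics`: accuracy as the fraction of expected facts present in the response, and relevance as word overlap with the question. The exact formulas the example uses are assumptions; the point is that both are computed mechanically, so memory-vs-baseline runs are reproducible.

```python
def accuracy(response: str, expected_facts: list) -> float:
    """Fraction of expected facts that appear in the response."""
    if not expected_facts:
        return 1.0
    hits = sum(fact.lower() in response.lower() for fact in expected_facts)
    return hits / len(expected_facts)

def relevance(response: str, question: str) -> float:
    """Crude word-overlap relevance between response and question."""
    r = set(response.lower().split())
    q = set(question.lower().split())
    return len(r & q) / len(q) if q else 1.0

response = "Your favorite color is blue and you live in Oslo."
facts = ["blue", "Oslo"]
print(accuracy(response, facts))   # 1.0 - both stored facts were recalled
print(relevance(response, "what is my favorite color"))
```

A baseline run (same prompts, no memory bundle) scored with the same functions gives the objective comparison the framework is built around.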
After running the examples, see:

- `docs/README.md` - Full documentation
- `docs/guides/QUICK_START.md` - Quick start guide
- `docs/API.md` - API reference