Examples

Practical examples for using ab, organized by use case.

Quick Start

  1. Basic Usage - 01_basic_usage.py - Get started in 60 seconds
  2. Project Management - 02_project_management.py - Track decisions and progress
  3. LLM Memory - 03_llm_memory.py - Enhance LLM conversations
  4. Image Database - 04_image_database.py - Manage image collections
  5. Graph Connections - 05_graph_connections.py - RFS memory web
  6. Database Testing - 06_database_testing.py - Standard test databases
  7. Cursor Extension - 07_cursor_extension.py - Stable working memory for dev agents
  8. Color Identification - 08_color_identification.py - Semantic memory for colors
  9. Pokémon Environment - 09_pokemon_environment.py - Long-term episodic memory for game AI
  10. LLM Memory Effectiveness Testing - 10_llm_memory_effectiveness_testing.py - Objective metrics for memory effectiveness

Running Examples

# Run any example
python examples/01_basic_usage.py
python examples/02_project_management.py
# etc.

Example Descriptions

01_basic_usage.py

Core features: storing memories, recalling context, emotional states, Council of Selves.

02_project_management.py

Project management: architecture decisions, feature planning, progress tracking.

03_llm_memory.py

LLM enhancement: persistent memory, user preferences, emotional context.

04_image_database.py

Image database: metadata storage, similarity search, multi-modal memory.

05_graph_connections.py

Graph features: card connections, auto-linking, graph traversal.

06_database_testing.py

Database testing: standard datasets (Chinook), relational structures.

07_cursor_extension.py

Cursor Extension - Stable Working Memory for Development Agents

Demonstrates how ab1.1 provides stable working memory for Cursor's AI dev agent:

  • Awareness cards per meaningful interaction (user messages, file changes, tool runs)
  • Multiple selves (planner, architect, executor) with specialized memory streams
  • Session continuity via recall
  • Task stack tracking across moments

Use Case: Make Cursor's AI dev agent feel like it has stable working memory across a session, remembering what you're doing, where files live, and which tasks are ongoing.

Key Features:

  • store_awareness() for moment creation
  • run_council() for multi-agent suggestions
  • store_card() with owner_self for self memory streams
  • recall_shape("awareness_last_N") for session continuity
  • Card connections for linking related tasks

08_color_identification.py

Color Identification - Semantic Memory for Color Experiences

Demonstrates how ab1.1 provides semantic memory for color experiences:

  • Color observation cards with RGB/HSV/hex data
  • Specialized selves (color_namer, color_stylist) subscribing to color buffers
  • Similarity-based recall for color matching
  • Context-aware color naming (app, screen, user feedback)

Use Case: Build a semantic memory system for color experiences, learning canonical names and contexts over time.

Key Features:

  • Buffer channels (visual channel for color data)
  • Custom selves with buffer subscriptions
  • Similarity-based recall and auto-linking
  • Context buffers for app/screen/user feedback
  • Graph traversal to find color clusters

09_pokemon_environment.py

Pokémon-Style Environment - Long-Term Episodic Memory for Game AI

Demonstrates how ab1.1 provides long-term episodic memory for game AI:

  • Awareness card per game tick with full state
  • Specialized selves (pathfinder, socializer, strategist) with persistent memory
  • Long-term recall of area knowledge
  • Forgetting mechanism (rarely-visited areas decay)

Use Case: Create a game AI with persistent, evolving memory that remembers areas, NPCs, and strategies across sessions.

Key Features:

  • Moment-per-tick timeline
  • Self memory streams (persistent knowledge per self)
  • Area-based recall filtering
  • Card_stats decay (forgetting mechanism)
  • Graph connections for spatial relationships

10_llm_memory_effectiveness_testing.py

LLM Memory Effectiveness Testing - Objective Metrics Framework

Comprehensive testing framework for evaluating the long-term memory effectiveness of LLM-based agents:

  • Structured test moments with inputs, expected outputs, and evaluation criteria
  • Proper handling of ab system outputs (memory bundles, master_input, excess_inputs)
  • Objective metrics: accuracy, relevance, completeness, consistency
  • Integration with LLM APIs (OpenAI, Anthropic, mock)
  • Master input structure following ab's moment system

Use Case: Test and measure how memory affects LLM behavior with objective, reproducible metrics.

Key Features:

  • TestMoment: structured test scenarios with expected outcomes
  • MemoryContextFormatter: formats ab memory bundles for LLM context
  • ObjectiveMetrics: calculates accuracy, relevance, completeness, consistency
  • LLMMemoryTester: orchestrates test execution with memory vs baseline comparison
  • Master input/excess inputs handling for moment-to-moment continuity

Next Steps

After running examples, see:

  • docs/README.md - Full documentation
  • docs/guides/QUICK_START.md - Quick start guide
  • docs/API.md - API reference