🧠 AI-Memory

License: MIT · Go · Redis · Qdrant

中文文档 | English


A biomimetic AI memory management framework that implements a human-like funnel memory system (STM → Staging → LTM), enabling AI agents to intelligently filter, retain, and recall valuable information.


🎯 Core Problem

Traditional AI conversation systems face critical memory challenges:

  • 💸 Memory Dilemma: Full retention is expensive; rapid forgetting breaks conversation continuity
  • 🗑️ Information Noise: Unable to distinguish valuable insights from trivial interactions
  • ❄️ Cold Start: Every conversation starts from zero, preventing long-term relationship building

AI-Memory solves these problems with a biologically inspired architecture that automatically manages the memory lifecycle, just like the human brain.


✨ Key Features

🧠 Biomimetic Funnel Architecture

Mimics human memory processes with three-tier filtering (a code sketch of the tiers follows the diagram):

┌───────────────────────────┬─────────────────────────────┐
│  STM (Short-Term Memory)  │  Redis Sliding Window       │
│  ↓ Recent conversations   │  Configurable 7-day TTL     │
├───────────────────────────┼─────────────────────────────┤
│  Staging Area             │  Multi-Criteria Filtering   │
│  ↓ Value judgment         │  • Recurrence count         │
│                           │  • Time window verification │
│                           │  • LLM-based scoring        │
├───────────────────────────┼─────────────────────────────┤
│  LTM (Long-Term Memory)   │  Qdrant Vector Store        │
│  ✓ Core knowledge         │  Semantic search enabled    │
└───────────────────────────┴─────────────────────────────┘
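
To make the funnel concrete, here is a minimal Go sketch of a record moving through the three tiers. The type and interface names are illustrative assumptions, not the project's actual API; the real abstractions live in pkg/memory/interfaces.go and pkg/store/.

// Illustrative sketch only; names do not match the repository's code.
package funnel

import "time"

// Record is one candidate memory moving through the funnel.
type Record struct {
    UserID    string
    Content   string
    Score     float64   // value score assigned by the LLM judge
    SeenCount int       // how many sessions have mentioned this idea
    FirstSeen time.Time // start of the verification time window
}

// The three tiers mirror STM -> Staging -> LTM.
type ShortTermStore interface {
    Append(userID string, r Record) error // sliding window kept in Redis
    RecentBatch(userID string, n int) ([]Record, error)
}

type StagingStore interface {
    Upsert(r Record) error // deduplicate and count recurrences
    DueForReview(window time.Duration) ([]Record, error)
}

type LongTermStore interface {
    Promote(r Record) error // embed and upsert into Qdrant
    Search(userID, query string, limit int) ([]Record, error)
}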

🎯 Intelligent Value Judgment

  • Multi-Dimensional Scoring: LLM evaluates memory importance, relevance, and uniqueness
  • Recurrence Validation: Ideas repeated across sessions are more likely to be important
  • Time Window: A cooling-off period prevents premature promotion and ensures stability
  • Confidence Grading: Auto-promote high-confidence memories, auto-discard low-value noise (see the sketch below)
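
A rough sketch of how confidence grading can be wired to the STAGING_* thresholds listed under Configuration. The judge helper and the inlined numbers are illustrative, not the exact logic in pkg/memory/funnel.go.

package judge

// Decision is the outcome of reviewing one staged memory.
type Decision int

const (
    Keep    Decision = iota // stay in staging and wait for more evidence
    Promote                 // move to LTM
    Discard                 // drop as low-value noise
)

// judge applies thresholds that would normally come from .env:
// STAGING_CONFIDENCE_LOW=0.5, STAGING_CONFIDENCE_HIGH=0.8,
// STAGING_MIN_OCCURRENCES=2, STAGING_MIN_WAIT_HOURS=48.
func judge(score float64, occurrences int, waitedHours float64) Decision {
    switch {
    case score < 0.5:
        return Discard // below the low-confidence threshold: auto-discard
    case score >= 0.8 && occurrences >= 2 && waitedHours >= 48:
        return Promote // high confidence, recurred, and survived the cooling period
    default:
        return Keep // plausible but not yet proven; re-evaluate later
    }
}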

♻️ Semantic Deduplication

  • Staging Dedup: Prevents duplicate memories from entering the funnel
  • LTM Pre-Promotion Check: Ensures uniqueness before final storage
  • Hybrid Approach: Vector similarity + LLM semantic comparison (see the sketch below)
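
A minimal sketch of the hybrid idea: a cheap vector-similarity gate first, with an LLM comparison only for borderline matches. The interfaces and thresholds below are assumptions for illustration; the actual logic lives in pkg/memory/ltm_dedup.go.

package dedup

import "context"

// VectorStore and LLM are stand-ins for the project's Qdrant and LLM clients.
type VectorStore interface {
    MostSimilar(ctx context.Context, text string) (match string, similarity float64, err error)
}

type LLM interface {
    SameMeaning(ctx context.Context, a, b string) (bool, error)
}

// isDuplicate uses vector similarity as a coarse filter and asks the LLM
// only when the result is ambiguous. Thresholds are illustrative.
func isDuplicate(ctx context.Context, vs VectorStore, llm LLM, candidate string) (bool, error) {
    match, sim, err := vs.MostSimilar(ctx, candidate)
    if err != nil {
        return false, err
    }
    switch {
    case sim < 0.80:
        return false, nil // clearly new information
    case sim > 0.97:
        return true, nil // near-identical embedding: treat as a duplicate
    default:
        // Grey zone: let the LLM compare the two statements semantically.
        return llm.SameMeaning(ctx, candidate, match)
    }
}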

📉 Automatic Decay & Forgetting

  • Ebbinghaus Curve: Simulates natural memory decay over time (see the decay sketch after this list)
  • Configurable Half-Life: Adjust decay rate based on use case
  • Auto-Cleanup: Removes low-value memories below threshold score
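
The decay rule is plain exponential half-life decay. A minimal sketch assuming the defaults shown under Configuration (LTM_DECAY_HALF_LIFE_DAYS=90, LTM_DECAY_MIN_SCORE=0.3); the helper names are hypothetical.

package decay

import (
    "math"
    "time"
)

// decayedScore applies exponential half-life decay:
// score(t) = initial * 0.5^(ageDays / halfLifeDays).
func decayedScore(initial float64, lastAccessed time.Time, halfLifeDays float64) float64 {
    ageDays := time.Since(lastAccessed).Hours() / 24
    return initial * math.Pow(0.5, ageDays/halfLifeDays)
}

// shouldEvict mirrors the cleanup rule: drop memories whose decayed score
// falls below the configured minimum (0.3 by default).
func shouldEvict(initial float64, lastAccessed time.Time) bool {
    return decayedScore(initial, lastAccessed, 90) < 0.3
}

With these defaults, an untouched memory with an initial score of 0.85 falls below the 0.3 eviction floor after roughly 135 days (0.85 * 0.5^(135/90) ≈ 0.30).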

📊 Monitoring & Dashboard

Real-time visibility into the memory system's health and performance:

  • Metric Tracking: Promotion rates, queue lengths, cache hit rates
  • Visual Trends: 24-hour trend lines for key activities
  • System Status: Live component health checks (Redis, Qdrant)

Screenshots: Monitoring Dashboard · Memory Statistics · Staging Area Review · Admin Control Panel

🚨 Intelligent Alert System

Automatically monitor memory system health and detect potential issues:

  • Dynamic Rule Configuration: Adjust alert thresholds and cooldown periods in real time via the Web UI, with no restart required (see the rule sketch after this list)
  • Multi-Level Alerts: Support for ERROR/WARNING/INFO severity levels
  • Built-in Rules:
    • Queue backlog detection (Staging queue too long)
    • Low promotion success rate
    • Cache hit rate anomalies
    • Memory decay spike detection
  • Trend Visualization: 24-hour alert trend charts rendered with ECharts
  • Persistent Statistics: Rule execution counts and notification success rates stored in database, data retained across service restarts
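
As a hedged illustration of what one dynamically configurable rule could look like, here is a small Go sketch; the field names and the ShouldFire helper are assumptions, not the project's actual schema.

package alerts

import "time"

type Severity string

const (
    Error   Severity = "ERROR"
    Warning Severity = "WARNING"
    Info    Severity = "INFO"
)

// Rule describes one configurable alert, e.g. "staging queue too long".
type Rule struct {
    Name      string
    Severity  Severity
    Threshold float64       // fires when the observed metric crosses this value
    Cooldown  time.Duration // minimum gap between repeated notifications
    lastFired time.Time
}

// ShouldFire checks the metric against the threshold and honours the cooldown.
func (r *Rule) ShouldFire(metric float64, now time.Time) bool {
    if metric < r.Threshold {
        return false
    }
    if now.Sub(r.lastFired) < r.Cooldown {
        return false // still cooling down from the previous notification
    }
    r.lastFired = now
    return true
}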

🌐 Internationalization Support

Comprehensive multi-language support:

  • Bilingual Interface: Chinese and English support
  • One-Click Switching: Quick language toggle in top navigation
  • Complete Translation: All pages, buttons, and messages fully translated
  • Localized Storage: Language preference automatically remembered

🔧 Production-Ready Features

  • Multi-Store Coordination: Redis (speed) + MySQL (structure) + Qdrant (semantics)
  • Fully Configurable: All thresholds and timeouts via environment variables
  • Background Automation: Scheduled tasks for staging promotion and decay cleanup (see the sketch after this list)
  • Admin Dashboard: Vue.js frontend for memory management and monitoring
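
A minimal sketch of how such background automation is commonly wired up in Go with time.Ticker loops; the Funnel interface, method names, and intervals are assumptions for illustration, not the project's actual scheduler.

package scheduler

import (
    "context"
    "log"
    "time"
)

// Funnel is a stand-in for the project's memory manager.
type Funnel interface {
    ReviewStaging(ctx context.Context) error // promote or discard staged memories
    DecayCleanup(ctx context.Context) error  // evict LTM entries below the score floor
}

// run drives the two periodic jobs until the context is cancelled.
func run(ctx context.Context, f Funnel) {
    reviewTicker := time.NewTicker(15 * time.Minute)
    decayTicker := time.NewTicker(24 * time.Hour)
    defer reviewTicker.Stop()
    defer decayTicker.Stop()

    for {
        select {
        case <-ctx.Done():
            return
        case <-reviewTicker.C:
            if err := f.ReviewStaging(ctx); err != nil {
                log.Printf("staging review failed: %v", err)
            }
        case <-decayTicker.C:
            if err := f.DecayCleanup(ctx); err != nil {
                log.Printf("decay cleanup failed: %v", err)
            }
        }
    }
}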

🚀 Quick Start

Prerequisites

  • Go 1.25+
  • Redis 7.0+
  • MySQL 8.0+
  • Qdrant 1.0+ (Vector database)
  • OpenAI API Key (or compatible endpoint like SiliconFlow)

Installation

# Clone the repository
git clone https://github.com/xwj-vic/AI-Memory.git
cd AI-Memory

# Copy and configure environment variables
cp .env.example .env
# Edit .env with your API keys and database credentials

# Run database schema
mysql -u root -p < schema.sql

# Install dependencies
go mod download

# Build the project
go build -o ai-memory

# Start the server
./ai-memory

The server will start on http://localhost:8080

Default Admin Credentials:

  • Username: admin
  • Password: admin123

🐳 Docker Deployment (Recommended)

One-command deployment with Docker Compose:

# Clone the repository
git clone https://github.com/xwj-vic/AI-Memory.git
cd AI-Memory

# Configure your environment (API keys, etc.)
cp .env.example .env
# Edit .env and set OPENAI_API_KEY

# Start all services
cd docker && docker-compose up -d

# View logs
docker-compose logs -f app

This will start:

  • AI-Memory App on port 8080
  • Redis for short-term memory
  • MySQL for metadata and metrics
  • Qdrant for vector search

To stop all services:

docker-compose down

📖 Architecture Overview

Data Flow

graph LR
    A[User Input] --> B[STM Redis]
    B --> C{Background Judge}
    C -->|Value Check| D[Staging Store]
    D --> E{Promotion Logic}
    E -->|Recurrence + Score| F[LTM Qdrant]
    E -->|Low Value| G[Discard]
    F --> H{Decay Check}
    H -->|Score Drop| I[Auto Evict]
    
    style A fill:#e1f5ff
    style B fill:#fff4e6
    style D fill:#fff9c4
    style F fill:#c8e6c9
    style I fill:#ffcdd2

Storage Layers

Layer    | Storage          | Purpose                     | TTL
---------|------------------|-----------------------------|-------------------------------
STM      | Redis            | Recent conversation context | 7 days (configurable)
Staging  | Redis Hash       | Value judgment queue        | Until promoted/discarded
LTM      | Qdrant Vector DB | Long-term knowledge base    | Decay-based (90-day half-life)
Metadata | MySQL            | User profiles, system state | Permanent

💡 Usage Example

Adding Memory

curl -X POST http://localhost:8080/api/memory/add \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user123",
    "session_id": "session456",
    "input": "I love hiking in the mountains",
    "output": "That sounds wonderful! What mountains do you usually visit?",
    "metadata": {"topic": "hobbies"}
  }'

Retrieving Relevant Memories

curl -X GET "http://localhost:8080/api/memory/retrieve?user_id=user123&query=outdoor%20activities&limit=5"

Response Format

{
  "memories": [
    {
      "id": "uuid-xxxx",
      "content": "User enjoys hiking in mountainous regions",
      "type": "ltm",
      "metadata": {
        "ltm_metadata": {
          "importance": 0.85,
          "last_accessed": "2025-12-16T10:30:00Z",
          "access_count": 12
        }
      },
      "created_at": "2025-12-01T08:00:00Z"
    }
  ]
}
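
If you are calling the API from Go, the response above can be decoded with structs that mirror the JSON. This is a minimal sketch: the struct names are not part of the project and only the fields shown above are covered.

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "net/url"
)

type Memory struct {
    ID        string         `json:"id"`
    Content   string         `json:"content"`
    Type      string         `json:"type"`
    Metadata  map[string]any `json:"metadata"`
    CreatedAt string         `json:"created_at"`
}

type RetrieveResponse struct {
    Memories []Memory `json:"memories"`
}

func main() {
    q := url.Values{}
    q.Set("user_id", "user123")
    q.Set("query", "outdoor activities")
    q.Set("limit", "5")

    resp, err := http.Get("http://localhost:8080/api/memory/retrieve?" + q.Encode())
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var out RetrieveResponse
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        panic(err)
    }
    for _, m := range out.Memories {
        fmt.Printf("[%s] %s\n", m.Type, m.Content)
    }
}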

⚙️ Configuration

Key environment variables in .env:

Memory Funnel Settings

# STM Configuration
STM_EXPIRATION_DAYS=7              # Auto-expire after N days
STM_WINDOW_SIZE=100               # Max recent messages
STM_BATCH_JUDGE_SIZE=10           # Batch processing size
STM_JUDGE_MIN_MESSAGES=5          # Trigger judge if msg count >= N
STM_JUDGE_MAX_WAIT_MINUTES=60     # Trigger judge if oldest msg wait >= N mins

# Staging Area
STAGING_MIN_OCCURRENCES=2         # Requires repetition
STAGING_MIN_WAIT_HOURS=48         # Cooling period
STAGING_VALUE_THRESHOLD=0.6       # Min score to promote
STAGING_CONFIDENCE_HIGH=0.8       # Auto-promote threshold
STAGING_CONFIDENCE_LOW=0.5        # Auto-discard threshold

# LTM Decay
LTM_DECAY_HALF_LIFE_DAYS=90       # Decay rate
LTM_DECAY_MIN_SCORE=0.3           # Eviction threshold

LLM Provider

LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_MODEL=gpt-4o-mini
OPENAI_EMBEDDING_MODEL=text-embedding-ada-002

💡 Tip: For cost optimization, use gpt-4o-mini for judgment tasks and gpt-4o only for critical extraction tasks.


🎨 Admin Dashboard

Access the web UI at http://localhost:8080 after starting the server.

Features:

  • 📊 Memory statistics and trends
  • 🔍 Search and filter memories by type/user
  • ✏️ Edit or delete specific memories
  • 👥 User management and session tracking
  • 🚨 Alert Center: Configure alert rules, view real-time alerts and trends
  • 🌐 Multi-language Support: Switch between Chinese and English

🏗️ Project Structure

ai-memory/
├── cmd/                    # CLI tools
├── pkg/
│   ├── api/               # REST API handlers
│   ├── auth/              # Authentication service
│   ├── config/            # Configuration loader
│   ├── llm/               # LLM client abstraction
│   ├── logger/            # Structured logging
│   ├── memory/            # Core memory logic
│   │   ├── manager.go     # Memory manager
│   │   ├── funnel.go      # Funnel system logic
│   │   ├── ltm_dedup.go   # LTM deduplication
│   │   └── interfaces.go  # Abstractions
│   ├── store/             # Storage implementations
│   │   ├── redis.go       # STM store
│   │   ├── qdrant.go      # Vector store
│   │   ├── mysql.go       # Metadata store
│   │   └── staging_store.go # Staging logic
│   └── types/             # Shared data models
├── frontend/              # Vue.js admin dashboard
├── schema.sql             # MySQL database schema
├── .env.example           # Configuration template
└── main.go                # Application entry point

🀝 Contributing

We welcome contributions! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Development Guidelines

  • Follow Go best practices and idiomatic style
  • Add tests for new features
  • Update documentation for API changes
  • Use meaningful commit messages

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


🙏 Acknowledgments

  • Qdrant for the powerful vector search engine
  • OpenAI for providing advanced LLM capabilities
  • Inspired by research on human memory and cognitive psychology

📬 Contact


Made with ❤️ for the AI community
