Deep Tree Echo

Deep Tree Echo is an advanced AI workspace environment with integrated memory systems and interactive components. It provides a unique interface for exploring AI concepts, cognitive architectures, and creative development through its arena-based workspace system.

Quick Start

Prerequisites

  • Python 3.10 or higher
  • Node.js (for frontend components)

Installation

Option 1: Using the installation script (Recommended)

```shell
./install.sh
```

Option 2: Manual installation

```shell
# Install Python dependencies
pip install -r requirements.txt

# Or using pip with pyproject.toml
pip install -e .
```

Option 3: Using Poetry (if you prefer)

```shell
poetry install
```

Running the Application

```shell
# Navigate to NanEcho directory
cd NanEcho

# Start the server
python server.py
```

Features

  • EchoSelf Chatbot: Standalone web chatbot with SillyTavern-compatible character cards
  • Echo Home Map: Navigate through different specialized rooms, each with unique functionality
  • Memory System: Store and retrieve information using advanced vector embeddings and semantic search
  • AI Chat: Interact with Deep Tree Echo's AI capabilities through a conversational interface
  • Workshop: Access development tools and creative coding environments (Arena as Workspace)
  • Visualization Studio: Transform abstract data into insightful visual representations
  • πŸ”„ Adaptive Feedback Loop: Autonomous hypergraph-encoded cognitive enhancement system
  • 🧬 Hypergraph Encoding System: Scheme-based repository introspection with adaptive attention allocation

Architecture

Deep Tree Echo is built on a modular architecture that combines several key components:

```mermaid
graph TD
    subgraph "Browser Environment"
        Client[Client Browser]
        WebContainer[WebContainer]

        subgraph "WebContainer Runtime"
            NodeJS[Node.js Runtime]
            FSLayer[Virtual File System]
            NPM[NPM Package System]

            subgraph "Deep Tree Echo Components"
                UI[User Interface]
                Memory[Memory System]
                Terminal[Terminal Emulation]
                Orchestrator[Orchestration Layer]
                FeedbackLoop[Adaptive Feedback Loop]
            end
        end

        Client --> WebContainer
        WebContainer --> NodeJS
        NodeJS --> FSLayer
        NodeJS --> NPM
        NPM --> UI
        NPM --> Memory
        NPM --> Terminal
        NPM --> Orchestrator
        NPM --> FeedbackLoop

        Memory <--> Orchestrator
        Terminal <--> Orchestrator
        UI <--> Orchestrator
        FeedbackLoop <--> Orchestrator
    end

    subgraph "External Services"
        SupabaseDB[(Supabase Database)]
        OpenAI[OpenAI API]
        Copilot[GitHub Copilot - Mocked]
    end

    Memory <--> SupabaseDB
    Orchestrator <--> OpenAI
    FeedbackLoop <--> Copilot
```

Core Concepts

Echo State Networks

Deep Tree Echo utilizes Echo State Networks (ESNs) for temporal pattern recognition and adaptive learning. These networks feature:

  • Reservoir computing with recurrent connections
  • Fixed internal weights with trained output weights
  • Ability to process temporal sequences efficiently
  • Self-morphing capabilities for adaptive learning
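
The reservoir-computing idea behind ESNs can be sketched with NumPy: a fixed random recurrent reservoir scaled to a chosen spectral radius, with only the linear readout trained by ridge regression. This is a generic textbook sketch, not the project's actual implementation; the class name, sizes, and constants are illustrative.

```python
import numpy as np

class MinimalESN:
    """Minimal Echo State Network: fixed random reservoir, trained linear readout."""

    def __init__(self, n_inputs, n_reservoir=100, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
        W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
        # Rescale recurrent weights so the echo state (fading memory) property holds
        W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
        self.W = W
        self.n_reservoir = n_reservoir
        self.W_out = None

    def _run(self, inputs):
        # Drive the reservoir through the input sequence, collecting states
        states = np.zeros((len(inputs), self.n_reservoir))
        x = np.zeros(self.n_reservoir)
        for t, u in enumerate(inputs):
            x = np.tanh(self.W_in @ np.atleast_1d(u) + self.W @ x)
            states[t] = x
        return states

    def fit(self, inputs, targets, ridge=1e-6):
        # Only the readout is trained (ridge regression); internal weights stay fixed
        S = self._run(inputs)
        self.W_out = np.linalg.solve(
            S.T @ S + ridge * np.eye(self.n_reservoir), S.T @ targets
        )

    def predict(self, inputs):
        return self._run(inputs) @ self.W_out
```

Trained on next-step prediction of a sine wave, even this tiny reservoir tracks the temporal pattern closely, which is the property the architecture relies on for sequence processing.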

Memory System

The memory system is inspired by human cognition and includes multiple memory types:

  • Episodic Memory: Stores experiences and events
  • Semantic Memory: Contains facts, concepts, and general knowledge
  • Procedural Memory: Handles skills and processes
  • Declarative Memory: Explicit knowledge that can be verbalized
  • Implicit Memory: Unconscious, automatic knowledge
  • Associative Memory: Connected ideas and concepts
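
As a rough illustration of how such typed memories might be partitioned (the real system stores memories as vector embeddings in Supabase; the `MemoryType` and `MemoryStore` names below are hypothetical):

```python
from dataclasses import dataclass, field
from enum import Enum

class MemoryType(Enum):
    EPISODIC = "episodic"        # experiences and events
    SEMANTIC = "semantic"        # facts and concepts
    PROCEDURAL = "procedural"    # skills and processes
    DECLARATIVE = "declarative"  # verbalizable knowledge
    IMPLICIT = "implicit"        # automatic knowledge
    ASSOCIATIVE = "associative"  # connected ideas

@dataclass
class Memory:
    content: str
    memory_type: MemoryType
    associations: list = field(default_factory=list)

class MemoryStore:
    """Toy in-memory store partitioned by memory type."""

    def __init__(self):
        self._by_type = {t: [] for t in MemoryType}

    def remember(self, content, memory_type):
        memory = Memory(content, memory_type)
        self._by_type[memory_type].append(memory)
        return memory

    def recall(self, memory_type):
        return list(self._by_type[memory_type])
```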

Self-Morphing Stream Networks

Deep Tree Echo implements Self-Morphing Stream Networks (SMSNs) that enhance its core capabilities:

  1. Echo-Based Self-Modification: Uses echo state networks for resonant patterns and adaptive topology
  2. Purpose-Driven Adaptation: Maintains purpose vectors to guide modifications while preserving identity
  3. Identity-Preserving Growth: Uses recursive pattern stores to maintain core identity during growth
  4. Collaborative Evolution: Implements adaptive connection pools for enhanced collaboration
  5. Deep Reflection Integration: Employs reflection networks for generating insights

🧠 Adaptive Feedback Loop

The Adaptive Feedback Loop implements a hypergraph-encoded cognitive enhancement system, inspired by the patterns described in echoself.md:

  • Hypergraph Encoding: Scheme-based cognitive patterns following Context β†’ Procedure β†’ Goal schematics
  • Adaptive Attention: Dynamic threshold adjustment based on cognitive load and recent activity
  • Semantic Salience: Multi-factor scoring combining demand, freshness, and feedback urgency
  • Autonomous Operation: Continuous feedback cycles with community integration
  • Copilot Integration: Mocked interface with clear extension points for AI-assisted model improvements
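
A multi-factor salience score of this shape could be sketched as below; the weights, the exponential freshness decay, and the half-life are illustrative assumptions, not the values the feedback loop actually uses:

```python
def semantic_salience(demand, age_seconds, urgency,
                      w_demand=0.4, w_fresh=0.3, w_urgency=0.3,
                      freshness_half_life=3600.0):
    """Combine demand, freshness, and feedback urgency into a single 0..1 score.

    Weights and half-life are illustrative, not the project's actual values.
    demand and urgency are assumed to already be normalized to 0..1.
    """
    # Freshness decays exponentially: half its value every `freshness_half_life` seconds
    freshness = 0.5 ** (age_seconds / freshness_half_life)
    score = w_demand * demand + w_fresh * freshness + w_urgency * urgency
    return max(0.0, min(1.0, score))
```

With equal demand and urgency, a just-touched item outranks one untouched for hours, which is the behavior the freshness factor is meant to capture.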

See src/services/feedback/README.md for detailed documentation and cognitive flowchart.

🧬 Hypergraph Encoding System

The Deep Tree Echo Hypergraph Encoding System provides Scheme-based repository introspection and cognitive pattern recognition:

  • Semantic Salience: Intelligent file importance scoring (0.0-1.0)
  • Adaptive Attention: Dynamic threshold adjustment based on cognitive state
  • Repository Analysis: Recursive traversal with 50KB file size limits
  • Prompt Templates: Neural-symbolic reasoning integration for AI systems
  • Python Bridge: Seamless integration with Python components
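
The adaptive-attention idea — raising the threshold under high cognitive load (attend to less) and lowering it when recent activity is sparse (explore more) — might look like this in outline; the coefficients are made up for illustration and are not the system's real parameters:

```python
def adaptive_attention_threshold(base=0.5, cognitive_load=0.0, recent_activity=0.0):
    """Compute a 0..1 attention threshold for file selection.

    Illustrative sketch: high cognitive_load raises the bar so only the most
    salient files pass; low recent_activity lowers it to encourage exploration.
    Both inputs are assumed normalized to 0..1.
    """
    threshold = base + 0.3 * cognitive_load - 0.2 * (1.0 - recent_activity)
    return max(0.0, min(1.0, threshold))
```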

Quick Start:

```python
from echo.hypergraph_bridge import HypergraphBridge

bridge = HypergraphBridge()
files = bridge.get_repository_files(threshold=0.75)
prompt = bridge.create_cognitive_prompt("Analyze patterns", 0.3, 0.7)
```

See echo/hypergraph/README.md for complete documentation, architecture diagrams, and examples.

Getting Started

Development

Run the development server:

```shell
npm run dev
```

Deployment

Build the app for production:

```shell
npm run build
```

Then run the app in production mode:

```shell
npm start
```

Technology Stack

  • Frontend: React, Tailwind CSS, Framer Motion
  • Backend: Remix, Node.js
  • Database: Supabase
  • AI Integration: OpenAI API
  • Vector Storage: Supabase Vector Extension

NanEcho Model Training & Deployment

EchoSelf includes NanEcho, a GPT-2-based transformer model trained specifically on Deep Tree Echo cognitive architecture patterns. The system supports continuous incremental training and deployment to HuggingFace Hub.

πŸš€ Training Workflows

  • netrain-cached.yml: Incremental training with checkpoint caching (every 6 hours)
  • netrain.yml: Full training with relentless persona reinforcement (every 4 hours)
  • Checkpoint Guardian: Multi-location backup system ensures training progress is never lost

πŸ€— HuggingFace Integration

Deploy trained models to HuggingFace Hub for sharing and version control:

```shell
# Deploy to HuggingFace
gh workflow run deploy-huggingface.yml \
  -f source_workflow=netrain-cached \
  -f training_type=full \
  -f create_release=true

# Train from HuggingFace model
gh workflow run netrain-cached.yml \
  -f download_from_hf=true \
  -f hf_repo_id=9cog/echoself-nanecho
```

Features:

  • βœ… Automatic model conversion to HuggingFace GPT-2 format
  • βœ… Dataset upload alongside models
  • βœ… Comprehensive model cards with training metadata
  • βœ… Download models for incremental training
  • βœ… Continuous improvement cycle: train β†’ deploy β†’ download β†’ train

Setup:

  1. Create HuggingFace token at https://huggingface.co/settings/tokens
  2. Add token as GitHub secret HFESELF
  3. Create model repository at https://huggingface.co/new

See NanEcho/HUGGINGFACE_README.md for complete documentation.

Automated Code Quality & Dependency Management

EchoSelf implements a "forever" automated solution for recurring TypeScript errors and dependency chaos, embodying distributed cognition between the codebase and CI/CD systems. This recursive, self-healing approach enables the codebase to co-evolve with automated tooling, requiring human intervention only for novel or ambiguous cases.

πŸ€– Automated Quality Workflows

The CI system automatically handles routine maintenance through comprehensive automation:

Code Quality Automation

  • Deno Lint: Runs deno lint --fix for Deno/TypeScript code quality
  • ESLint: Applies eslint --fix for JavaScript/TypeScript linting
  • Prettier: Executes prettier --write for consistent code formatting
  • Auto-commits: Automatically commits fixable changes with descriptive messages
  • Scheduled runs: Nightly maintenance at 2 AM UTC for continuous improvement

Dependency Management

  • Security auditing: Regular npm audit scans for vulnerabilities
  • Unused dependency detection: Identifies and logs dependencies not referenced in codebase
  • Freshness tracking: Monitors dependencies not updated for 6+ months
  • Automated cleanup: Removes stale or unnecessary dependencies
  • Change logging: All dependency modifications logged to .maintenance-logs/ for transparency
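
Unused-dependency detection of this kind can be approximated by comparing `package.json` against the import specifiers found in source files. The sketch below is a rough heuristic — the `find_unused_dependencies` helper is hypothetical, not the workflow's actual tooling — and it will miss indirect uses such as CLI-only or config-referenced packages:

```python
import json
import re
from pathlib import Path

def find_unused_dependencies(package_json_path, source_dir):
    """List declared npm dependencies never imported or required under source_dir.

    Heuristic sketch: scans .js/.jsx/.ts/.tsx files for `from '...'` and
    `require('...')` specifiers and keeps only the package portion of paths
    like 'pkg/sub' or '@scope/pkg/sub'.
    """
    declared = set(json.loads(Path(package_json_path).read_text())
                   .get("dependencies", {}))
    pattern = re.compile(r"""(?:from\s+|require\()\s*['"]([^'"]+)['"]""")
    used = set()
    for path in Path(source_dir).rglob("*"):
        if path.suffix in {".js", ".jsx", ".ts", ".tsx"}:
            for spec in pattern.findall(path.read_text(errors="ignore")):
                if not spec.startswith("."):  # skip relative imports
                    parts = spec.split("/")
                    used.add("/".join(parts[:2]) if spec.startswith("@")
                             else parts[0])
    return sorted(declared - used)
```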

πŸ”„ Self-Healing CI/CD System

The automation embodies distributed cognition principles:

  1. Autonomous Operation: Routine fixes applied without human intervention
  2. Intelligent Escalation: Creates GitHub issues when manual intervention is required
  3. Learning System: Logs all changes for pattern analysis and future automation improvements
  4. Cognitive Transparency: Comprehensive logging ensures maintainers understand all automated changes

πŸ“Š Maintenance Transparency

All automated actions are logged in .maintenance-logs/:

  • latest-report.md - Most recent maintenance summary
  • dependency-audit.md - Latest dependency analysis
  • Historical logs with timestamps for trend analysis

🚨 Manual Intervention Points

The system automatically creates GitHub issues labeled automated-maintenance and needs-manual-intervention when:

  • TypeScript errors cannot be auto-fixed
  • ESLint rules require manual code changes
  • Dependency conflicts need human decision-making
  • Novel error patterns emerge that the automation cannot handle

βš™οΈ Workflow Configuration

The automation runs on:

  • Every push/PR: Quick quality checks and auto-fixes
  • Nightly schedule: Full dependency audit and maintenance
  • Manual dispatch: Force full audit via GitHub Actions interface

To disable automation temporarily, modify .github/workflows/automated-quality.yml or use workflow settings in GitHub.

πŸ”§ Extending the Automation

To add new automated fixes:

  1. Add new tooling commands to the workflow
  2. Ensure proper error handling with continue-on-error: true
  3. Update the maintenance logging to capture new tool outputs
  4. Test manual intervention scenarios

This system ensures EchoSelf maintains high code quality while minimizing cognitive load on human maintainers, allowing focus on creative and strategic development rather than routine maintenance tasks.


Disk Space Management

Overview

This repository includes tools for monitoring and managing disk space, particularly useful in CI/CD environments like GitHub Actions where runners have pre-installed SDKs and tools that may not be needed.

Analysis Script

Run the disk space analysis script to identify space-consuming directories:

```shell
bash scripts/analyze_disk_space.sh
```

This generates a comprehensive report showing:

  • Overall disk usage
  • Top-level directory breakdown
  • /usr, /opt, and /var directory analysis
  • Identification of major space consumers (Android SDK, .NET, Haskell, Swift, etc.)

Cleanup Script

To free up disk space by removing common pre-installed tools:

```shell
bash scripts/cleanup_disk_space.sh
```

⚠️ Warning: This script removes tools that may be needed for some workflows. Review the script before running it.

Default removals include:

  • Android SDK (~12 GB)
  • Haskell toolchain (~6.4 GB)
  • .NET SDK (~4 GB)
  • Swift toolchain (~3.2 GB)
  • Hosted toolcache (~5.8 GB)

GitHub Actions Workflow

Use the Disk Space Management workflow for on-demand analysis and cleanup:

  1. Go to Actions β†’ Disk Space Management
  2. Click "Run workflow"
  3. Select options:
    • Action: analyze, cleanup, or analyze-and-cleanup
    • Selective removal: Choose which SDKs to remove
  4. Review the results in workflow artifacts

In Your Workflows

Add disk space cleanup at the start of workflows that need more space:

```yaml
steps:
  - name: Free Disk Space
    run: |
      sudo rm -rf /usr/local/lib/android
      sudo rm -rf /usr/local/.ghcup
      sudo rm -rf /usr/share/dotnet
      sudo rm -rf /usr/share/swift
      df -h
```

For more details, see DISK_SPACE_ANALYSIS.md.


Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Environment Setup

Copy .env.example to .env and fill in your credentials:

```shell
cp .env.example .env
```

  • SUPABASE_URL and SUPABASE_ANON_KEY are required
  • OPENAI_API_KEY is optional (enables embeddings and AI chat)

Supabase Schema

Run the migration to create the memories table and the match_memories RPC (requires pgvector):

```sql
-- See file: supabase/migrations/20250101_memories_and_match.sql
```

Apply it in your Supabase project (SQL Editor) or via CLI.

Portability Note

Local vector search via hnswlib-node is now lazy-loaded at runtime. If the native module is unavailable in your environment, the app will continue running and fall back to Supabase-only vector search.
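
The lazy-load-with-fallback pattern looks roughly like this (expressed in Python rather than Node.js purely for illustration; `resolve_search` is a hypothetical name, not part of the codebase):

```python
import importlib

def resolve_search(module_name, local_search, remote_search):
    """Lazily probe for an optional native module at call time.

    Returns local_search when the module imports cleanly, otherwise
    remote_search, so the application keeps running either way.
    Sketch of the hnswlib-node lazy-load pattern in Python terms.
    """
    try:
        importlib.import_module(module_name)
    except ImportError:
        # Native extension unavailable in this environment: degrade gracefully
        return remote_search
    return local_search
```

Deferring the import to runtime means environments without the native build never pay for it and never crash on startup; they simply take the remote-only path.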
