Deep Tree Echo is an advanced AI workspace environment with integrated memory systems and interactive components. It provides a unique interface for exploring AI concepts, cognitive architectures, and creative development through its arena-based workspace system.
- Python 3.10 or higher
- Node.js (for frontend components)
```bash
./install.sh
```

```bash
# Install Python dependencies
pip install -r requirements.txt

# Or using pip with pyproject.toml
pip install -e .

# Or using Poetry
poetry install
```

```bash
# Navigate to NanEcho directory
cd NanEcho

# Start the server
python server.py
```

- EchoSelf Chatbot: Standalone web chatbot with SillyTavern-compatible character cards (Try it online)
- Echo Home Map: Navigate through different specialized rooms, each with unique functionality
- Memory System: Store and retrieve information using advanced vector embeddings and semantic search
- AI Chat: Interact with Deep Tree Echo's AI capabilities through a conversational interface
- Workshop: Access development tools and creative coding environments (Arena as Workspace)
- Visualization Studio: Transform abstract data into insightful visual representations
- Adaptive Feedback Loop: Autonomous hypergraph-encoded cognitive enhancement system
- Hypergraph Encoding System: Scheme-based repository introspection with adaptive attention allocation (Documentation)
Deep Tree Echo is built on a modular architecture that combines several key components:
```mermaid
graph TD
    subgraph "Browser Environment"
        Client[Client Browser]
        WebContainer[WebContainer]
        subgraph "WebContainer Runtime"
            NodeJS[Node.js Runtime]
            FSLayer[Virtual File System]
            NPM[NPM Package System]
            subgraph "Deep Tree Echo Components"
                UI[User Interface]
                Memory[Memory System]
                Terminal[Terminal Emulation]
                Orchestrator[Orchestration Layer]
                FeedbackLoop[Adaptive Feedback Loop]
            end
        end
        Client --> WebContainer
        WebContainer --> NodeJS
        NodeJS --> FSLayer
        NodeJS --> NPM
        NPM --> UI
        NPM --> Memory
        NPM --> Terminal
        NPM --> Orchestrator
        NPM --> FeedbackLoop
        Memory <--> Orchestrator
        Terminal <--> Orchestrator
        UI <--> Orchestrator
        FeedbackLoop <--> Orchestrator
    end
    subgraph "External Services"
        SupabaseDB[(Supabase Database)]
        OpenAI[OpenAI API]
        Copilot[GitHub Copilot - Mocked]
    end
    Memory <--> SupabaseDB
    Orchestrator <--> OpenAI
    FeedbackLoop <--> Copilot
```
Deep Tree Echo utilizes Echo State Networks (ESNs) for temporal pattern recognition and adaptive learning. These networks feature:
- Reservoir computing with recurrent connections
- Fixed internal weights with trained output weights
- Ability to process temporal sequences efficiently
- Self-morphing capabilities for adaptive learning
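The reservoir-computing idea above can be sketched in a few lines of NumPy: the input and recurrent weights are generated once and frozen, and only a linear readout is trained (here by ridge regression on a toy next-step sine-prediction task). The dimensions, scaling constants, and task below are illustrative assumptions, not the project's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir dimensions (illustrative values, not the project's real config)
n_inputs, n_reservoir = 1, 100

# Fixed random weights: only the readout below is trained
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
# Scale spectral radius below 1 so the echo state property holds
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Collect the reservoir state after each input in a sequence."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy temporal task: predict the next value of a sine wave
t = np.linspace(0, 8 * np.pi, 400)
u_seq, y_seq = np.sin(t[:-1]), np.sin(t[1:])
S = run_reservoir(u_seq)

# Train the linear readout with ridge regression
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_reservoir), S.T @ y_seq)

pred = S @ W_out
mse = float(np.mean((pred - y_seq) ** 2))
print(f"train MSE: {mse:.4f}")  # low error despite the untrained reservoir
```

Because the recurrent weights stay fixed, training reduces to a single linear solve, which is what makes ESNs cheap to adapt.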
The memory system is inspired by human cognition and includes multiple memory types:
- Episodic Memory: Stores experiences and events
- Semantic Memory: Contains facts, concepts, and general knowledge
- Procedural Memory: Handles skills and processes
- Declarative Memory: Explicit knowledge that can be verbalized
- Implicit Memory: Unconscious, automatic knowledge
- Associative Memory: Connected ideas and concepts
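As a rough illustration of this taxonomy (the real system persists memories to Supabase with vector embeddings; this hypothetical in-process sketch only shows the type distinctions):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# The six memory types described above; names are taken from the text.
MEMORY_TYPES = {
    "episodic", "semantic", "procedural",
    "declarative", "implicit", "associative",
}

@dataclass
class Memory:
    content: str
    memory_type: str
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        if self.memory_type not in MEMORY_TYPES:
            raise ValueError(f"unknown memory type: {self.memory_type}")

class MemoryStore:
    """Minimal in-process store, grouped by memory type."""
    def __init__(self):
        self._by_type = {t: [] for t in MEMORY_TYPES}

    def add(self, memory: Memory):
        self._by_type[memory.memory_type].append(memory)

    def recall(self, memory_type: str):
        return list(self._by_type[memory_type])

store = MemoryStore()
store.add(Memory("Met the user in the Workshop room", "episodic"))
store.add(Memory("ESNs use fixed reservoir weights", "semantic"))
print(len(store.recall("episodic")))  # → 1
```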
Deep Tree Echo implements Self-Morphing Stream Networks (SMSNs) that enhance its core capabilities:
- Echo-Based Self-Modification: Uses echo state networks for resonant patterns and adaptive topology
- Purpose-Driven Adaptation: Maintains purpose vectors to guide modifications while preserving identity
- Identity-Preserving Growth: Uses recursive pattern stores to maintain core identity during growth
- Collaborative Evolution: Implements adaptive connection pools for enhanced collaboration
- Deep Reflection Integration: Employs reflection networks for generating insights
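The purpose-driven adaptation idea can be sketched as a simple alignment gate: a proposed modification is accepted only while it remains directionally consistent with a stored purpose vector. The threshold and vectors below are invented for illustration, not taken from the SMSN implementation.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def accept_modification(purpose, proposed, min_alignment=0.6):
    """Gate a proposed state change by its alignment with the purpose vector."""
    return cosine(purpose, proposed) >= min_alignment

purpose = np.array([1.0, 0.5, 0.0])
aligned = np.array([0.9, 0.6, 0.1])      # small drift, same direction
divergent = np.array([-1.0, 0.2, 0.9])   # would change identity

print(accept_modification(purpose, aligned))    # True
print(accept_modification(purpose, divergent))  # False
```

The gate allows gradual growth (small drifts pass) while rejecting changes that would flip the system's overall direction, which is the identity-preserving intent described above.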
The Adaptive Feedback Loop implements a hypergraph-encoded cognitive enhancement system inspired by the patterns in echoself.md:
- Hypergraph Encoding: Scheme-based cognitive patterns following Context → Procedure → Goal schematics
- Adaptive Attention: Dynamic threshold adjustment based on cognitive load and recent activity
- Semantic Salience: Multi-factor scoring combining demand, freshness, and feedback urgency
- Autonomous Operation: Continuous feedback cycles with community integration
- Copilot Integration: Mocked interface with clear extension points for AI-assisted model improvements
See src/services/feedback/README.md for detailed documentation and cognitive flowchart.
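As a hedged illustration of the two mechanisms described above, the sketch below combines demand, freshness, and urgency into a salience score and adjusts an attention threshold from cognitive load and recent activity. The weights, formulas, and item names are assumptions, not the project's actual implementation:

```python
# Illustrative sketch only: weights and formulas are invented to match
# the prose description, not copied from src/services/feedback.

def semantic_salience(demand, freshness, urgency, weights=(0.4, 0.3, 0.3)):
    """Combine three [0, 1] factors into a single [0, 1] salience score."""
    score = sum(w * f for w, f in zip(weights, (demand, freshness, urgency)))
    return min(1.0, max(0.0, score))

def attention_threshold(base, cognitive_load, recent_activity):
    """Raise the threshold under high load (attend to less);
    lower it when recent activity is high (attend to more)."""
    threshold = base + 0.3 * cognitive_load - 0.2 * recent_activity
    return min(1.0, max(0.0, threshold))

threshold = attention_threshold(base=0.5, cognitive_load=0.8, recent_activity=0.2)
for item, factors in {
    "fix_failing_test": (0.9, 0.8, 0.9),  # high demand, fresh, urgent
    "tidy_changelog": (0.2, 0.1, 0.1),    # low on every factor
}.items():
    score = semantic_salience(*factors)
    if score >= threshold:
        print(f"attend: {item} ({score:.2f})")
```

Only the high-salience item clears the load-adjusted threshold, which is the attention-allocation behavior the feedback loop aims for.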
The Deep Tree Echo Hypergraph Encoding System provides Scheme-based repository introspection and cognitive pattern recognition:
- Semantic Salience: Intelligent file importance scoring (0.0-1.0)
- Adaptive Attention: Dynamic threshold adjustment based on cognitive state
- Repository Analysis: Recursive traversal with 50KB file size limits
- Prompt Templates: Neural-symbolic reasoning integration for AI systems
- Python Bridge: Seamless integration with Python components
Quick Start:
```python
from echo.hypergraph_bridge import HypergraphBridge

bridge = HypergraphBridge()
files = bridge.get_repository_files(threshold=0.75)
prompt = bridge.create_cognitive_prompt("Analyze patterns", 0.3, 0.7)
```

See echo/hypergraph/README.md for complete documentation, architecture diagrams, and examples.
Run the development server:
```bash
npm run dev
```

Build the app for production:

```bash
npm run build
```

Then run the app in production mode:

```bash
npm start
```

- Frontend: React, Tailwind CSS, Framer Motion
- Backend: Remix, Node.js
- Database: Supabase
- AI Integration: OpenAI API
- Vector Storage: Supabase Vector Extension
EchoSelf includes NanEcho, a GPT-2-based transformer model trained specifically on Deep Tree Echo cognitive architecture patterns. The system supports continuous incremental training and deployment to HuggingFace Hub.
- `netrain-cached.yml`: Incremental training with checkpoint caching (runs every 6 hours)
- `netrain.yml`: Full training with relentless persona reinforcement (runs every 4 hours)
- Checkpoint Guardian: Multi-location backup system ensures training progress is never lost
Deploy trained models to HuggingFace Hub for sharing and version control:
```bash
# Deploy to HuggingFace
gh workflow run deploy-huggingface.yml \
  -f source_workflow=netrain-cached \
  -f training_type=full \
  -f create_release=true

# Train from HuggingFace model
gh workflow run netrain-cached.yml \
  -f download_from_hf=true \
  -f hf_repo_id=9cog/echoself-nanecho
```

Features:
- Automatic model conversion to HuggingFace GPT-2 format
- Dataset upload alongside models
- Comprehensive model cards with training metadata
- Download models for incremental training
- Continuous improvement cycle: train → deploy → download → train
Setup:
- Create HuggingFace token at https://huggingface.co/settings/tokens
- Add the token as a GitHub secret named `HFESELF`
- Create a model repository at https://huggingface.co/new
See NanEcho/HUGGINGFACE_README.md for complete documentation.
EchoSelf implements a "forever" automated solution for recurring TypeScript errors and dependency chaos, embodying distributed cognition between the codebase and CI/CD systems. This recursive, self-healing approach enables the codebase to co-evolve with automated tooling, requiring human intervention only for novel or ambiguous cases.
The CI system automatically handles routine maintenance through comprehensive automation:
- Deno Lint: Runs `deno lint --fix` for Deno/TypeScript code quality
- ESLint: Applies `eslint --fix` for JavaScript/TypeScript linting
- Prettier: Executes `prettier --write` for consistent code formatting
- Auto-commits: Automatically commits fixable changes with descriptive messages
- Scheduled runs: Nightly maintenance at 2 AM UTC for continuous improvement
- Security auditing: Regular `npm audit` scans for vulnerabilities
- Unused dependency detection: Identifies and logs dependencies not referenced in the codebase
- Freshness tracking: Monitors dependencies not updated for 6+ months
- Automated cleanup: Removes stale or unnecessary dependencies
- Change logging: All dependency modifications logged to `.maintenance-logs/` for transparency
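The freshness check can be illustrated with a small pure function. The real workflow queries the npm registry for last-publish dates; in this sketch they are passed in directly, and the package names are arbitrary examples:

```python
from datetime import datetime, timedelta, timezone

# Dependencies not updated for roughly six months are flagged as stale.
STALE_AFTER = timedelta(days=183)

def stale_dependencies(last_updated, now=None):
    """Return names of dependencies whose last update is 6+ months old."""
    now = now or datetime.now(timezone.utc)
    return sorted(
        name for name, updated in last_updated.items()
        if now - updated >= STALE_AFTER
    )

now = datetime(2025, 1, 1, tzinfo=timezone.utc)
deps = {
    "left-pad": datetime(2023, 3, 10, tzinfo=timezone.utc),  # long stale
    "react": datetime(2024, 11, 20, tzinfo=timezone.utc),    # fresh
}
print(stale_dependencies(deps, now=now))  # → ['left-pad']
```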
The automation embodies distributed cognition principles:
- Autonomous Operation: Routine fixes applied without human intervention
- Intelligent Escalation: Creates GitHub issues when manual intervention is required
- Learning System: Logs all changes for pattern analysis and future automation improvements
- Cognitive Transparency: Comprehensive logging ensures maintainers understand all automated changes
All automated actions are logged in .maintenance-logs/:
- `latest-report.md` - Most recent maintenance summary
- `dependency-audit.md` - Latest dependency analysis
- Historical logs with timestamps for trend analysis
The system automatically creates GitHub issues labeled `automated-maintenance` and `needs-manual-intervention` when:
- TypeScript errors cannot be auto-fixed
- ESLint rules require manual code changes
- Dependency conflicts need human decision-making
- Novel error patterns emerge that the automation cannot handle
The automation runs on:
- Every push/PR: Quick quality checks and auto-fixes
- Nightly schedule: Full dependency audit and maintenance
- Manual dispatch: Force full audit via GitHub Actions interface
To disable automation temporarily, modify .github/workflows/automated-quality.yml or use workflow settings in GitHub.
To add new automated fixes:
- Add new tooling commands to the workflow
- Ensure proper error handling with `continue-on-error: true`
- Update the maintenance logging to capture new tool outputs
- Test manual intervention scenarios
This system ensures EchoSelf maintains high code quality while minimizing cognitive load on human maintainers, allowing focus on creative and strategic development rather than routine maintenance tasks.
This repository includes tools for monitoring and managing disk space, particularly useful in CI/CD environments like GitHub Actions where runners have pre-installed SDKs and tools that may not be needed.
Run the disk space analysis script to identify space-consuming directories:
```bash
bash scripts/analyze_disk_space.sh
```

This generates a comprehensive report showing:
- Overall disk usage
- Top-level directory breakdown
- /usr, /opt, and /var directory analysis
- Identification of major space consumers (Android SDK, .NET, Haskell, Swift, etc.)
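The per-directory accounting the shell script performs can be sketched in Python. This is an analogy, not the script itself (`analyze_disk_space.sh` is shell), and the demo runs on a throwaway temp tree rather than `/usr`:

```python
import os
import tempfile

def directory_sizes(root):
    """Map each immediate subdirectory of `root` to its total size in
    bytes, largest first."""
    sizes = {}
    for entry in os.scandir(root):
        if not entry.is_dir(follow_symlinks=False):
            continue
        total = 0
        for dirpath, _dirnames, filenames in os.walk(entry.path):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.isfile(path):
                    total += os.path.getsize(path)
        sizes[entry.name] = total
    return dict(sorted(sizes.items(), key=lambda kv: -kv[1]))

# Demo on a throwaway tree with one 1 KiB file
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "android-sdk"))
    with open(os.path.join(root, "android-sdk", "blob.bin"), "wb") as f:
        f.write(b"\0" * 1024)
    result = directory_sizes(root)
    print(result)  # → {'android-sdk': 1024}
```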
To free up disk space by removing common pre-installed tools:
```bash
bash scripts/cleanup_disk_space.sh
```

Default removals include:
- Android SDK (~12G)
- Haskell toolchain (~6.4G)
- .NET SDK (~4G)
- Swift toolchain (~3.2G)
- Hosted toolcache (~5.8G)
Use the Disk Space Management workflow for on-demand analysis and cleanup:
- Go to Actions → Disk Space Management
- Click "Run workflow"
- Select options:
- Action: analyze, cleanup, or analyze-and-cleanup
- Selective removal: Choose which SDKs to remove
- Review the results in workflow artifacts
Add disk space cleanup at the start of workflows that need more space:
```yaml
steps:
  - name: Free Disk Space
    run: |
      sudo rm -rf /usr/local/lib/android
      sudo rm -rf /usr/local/.ghcup
      sudo rm -rf /usr/share/dotnet
      sudo rm -rf /usr/share/swift
      df -h
```

For more details, see DISK_SPACE_ANALYSIS.md.
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
Copy .env.example to .env and fill in your credentials:
```bash
cp .env.example .env
```

- `SUPABASE_URL` and `SUPABASE_ANON_KEY` are required
- `OPENAI_API_KEY` is optional (enables embeddings and AI chat)
Run the migration to create the memories table and the match_memories RPC (requires pgvector):
See `supabase/migrations/20250101_memories_and_match.sql`. Apply it in your Supabase project (SQL Editor) or via CLI.
Local vector search via hnswlib-node is now lazy-loaded at runtime. If the native module is unavailable in your environment, the app will continue running and fall back to Supabase-only vector search.