RPirruccio/greenlights-ai

Greenlights AI 🚦

"I want a private LLM where I can upload my books, my journals, my favorite articles... and basically learn more about myself." - Matthew McConaughey on JRE

McConaughey described the problem. We built the solution - and it goes way beyond what he imagined.

What This Really Is: A Metacognitive Mirror for Your Mind

Greenlights AI enhances human cognition by creating an AI that watches your thinking patterns and helps you avoid cognitive failures. It's not about making AI smarter - it's about making YOU smarter through AI.

Your books and influences become the training data. The AI learns to spot when you're falling into mental traps and applies fixes from the thinkers you admire. Real-time cognitive enhancement through conversation.

The killer feature: Voice mode. Think out loud, get instant feedback, compound your learning.

A Note on Privacy

  • Voice processing: Can be fully private with local TTS/STT on your hardware
  • Intelligence layer: Currently requires Claude Code (cloud-based)
  • Reality: We're years away from open source models matching Claude/GPT-5 capabilities
  • Trade-off: You get cognitive enhancement today, but the AI intelligence isn't local yet

Prerequisites

  • Claude Code - Anthropic's AI coding assistant (see claude.ai/code)
  • Basic text editor to customize markdown files
  • (Optional) Docker + NVIDIA GPU for voice mode setup

Quick Start (5 minutes)

Step 1: Get Claude Code

If you don't have Claude Code yet, sign up at claude.ai/code

Step 2: Clone This Repository

git clone https://github.com/RPirruccio/greenlights-ai.git
cd greenlights-ai

Step 3: Configure Claude Code

Tell Claude Code to use these files as its instructions:

  1. Open Claude Code settings
  2. Set this folder as your project directory
  3. The CLAUDE.md file will automatically be loaded as instructions

Step 4: Make It Yours

Edit 01-cognitive-stack.md to add your own thinkers and books. The default includes Goggins, Marcus Aurelius, and others, but you can replace them with anyone whose thinking patterns you admire.
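As a sketch, a hypothetical entry in 01-cognitive-stack.md might look like the following (the heading layout and field names here are illustrative, not a required format - see the default file for the actual structure):

```markdown
## Marcus Aurelius - Stoic Framing

**Source:** Meditations

**Catches:** Emotional Hijacking, Catastrophizing

**Pattern:** When my reasoning turns emotional, reframe the situation
in terms of what is and is not within my control.

**Example intervention:** "You're catastrophizing about the launch.
What here is actually under your control?"
```

The key idea is to pair each thinker with the specific cognitive failures they help you catch, so the AI knows when to invoke them.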

Step 5: Test It Out

Ask Claude the same question before and after loading your cognitive stack. Watch how the thinking changes.

The Science: Everyone's Brain Has the Same Bugs

Cognitive science has documented universal thinking failures:

  • Confirmation Bias - Only seeing evidence that confirms your beliefs
  • Analysis Paralysis - Getting lost in complexity
  • Premature Closure - Stopping at the first answer
  • Emotional Hijacking - Feelings overriding logic
  • And 16+ more documented failures

These aren't opinions or character flaws. They're how human cognition fails, backed by decades of research.

The Vision: Where We're Going

We believe the metacognitive mirror can enhance human cognition through AI. Here's the path:

Phase 1: Foundation (Building with Claude Code NOW)

  • Claude Code as the tool-building playground
  • Developing atomic capabilities and tools
  • Testing cognitive patterns from books
  • Voice mode experiments
  • What you do: Experiment, build tools, test ideas
  • Status: Active development environment available

Phase 2: Agent System (LangGraph Implementation)

  • Orchestrate tools into intelligent agents
  • Implement CRLC cycle (Capture, Reflect, Learn, Create)
  • Add feedback loops and evaluation
  • What happens: Agents start working together
  • Status: Architecture defined, needs implementation

Phase 3: Autonomous Enhancement (The Breakthrough)

  • Fine-tuned system that predicts and prevents failures
  • Continuous improvement through feedback
  • Minimal human intervention needed
  • What you get: Cognitive enhancement on autopilot
  • Status: Vision documented

Current Reality: This is a research project with a working experimental framework. We're building toward Phase 3 but starting with Claude Code as our prototype environment.

Repository Structure

greenlights-ai/
├── README.md                    # This file - start here
├── CLAUDE.md                    # Main orchestrator - copy to Claude Code root
├── 01-cognitive-stack.md        # Your layered thinking patterns
├── 02-ai-communication.md       # How your AI should talk to you
├── 03-books.md                  # Your book collection for cognitive extraction
├── 04-claude-code-documentation.md # Advanced: hooks & automation
├── roadmap/                     # Development roadmap (in progress)
│   ├── README.md                # Roadmap overview and status
│   ├── 00-minimal-viable-abstraction-of-cognition.md
│   └── ...                      # Vision and architecture docs
├── tools/                       # Experimental cognitive tools
├── journal/                     # Example journal entries
├── .claude/                     # Claude Code settings
└── .mcp.json                    # MCP server configuration

Why You Stay in Control

  • You choose the books/thinkers - Your influences, not ours
  • You accept or reject suggestions - AI proposes, human decides
  • You see the reasoning - Full transparency in decision-making
  • You can modify everything - Open source, hackable, yours

What You Can Do TODAY

This is an experimental framework for cognitive enhancement research:

  • Experiment with Claude Code - Use hooks, subagents, and automation
  • Test voice mode conversations - Natural interaction with your cognitive patterns
  • Define your own cognitive stack - Map books to cognitive failures
  • Join the research - Help define capabilities for Phase 2 and 3

To be clear: the full metacognitive mirror doesn't exist yet. This is the playground where we're building it.

Advanced Features

Voice Mode (Game Changer) 🎙️

This is the killer feature - Talk to your cognitive OS naturally:

  • Have actual conversations with Marcus Aurelius' thinking patterns
  • Switch between Goggins motivation and Stoic calm mid-conversation
  • Journal insights without typing
  • Process problems out loud with different cognitive architectures

Setup Options:
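
For example, a minimal .mcp.json registering the VoiceMode MCP server might look like the sketch below. The uvx command and voice-mode package name are assumptions - check the VoiceMode documentation for the current install method and for configuring local STT/TTS:

```json
{
  "mcpServers": {
    "voicemode": {
      "command": "uvx",
      "args": ["voice-mode"]
    }
  }
}
```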

Claude Code Integration

  • Custom hooks for automated workflows
  • Subagents for specialized thinking
  • MCP servers for extended capabilities
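
As an illustration, a hook in .claude/settings.json that fires after file-editing tools could look like the following sketch (the logging command is a hypothetical placeholder; consult the Claude Code hooks documentation for the supported events and matcher syntax):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "echo 'file changed' >> journal/activity.log"
          }
        ]
      }
    ]
  }
}
```

A hook like this could, for instance, keep a running log of journal edits for the AI to reflect on later.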

Community Contributions

Fork this repo and share your extracted architectures. Help others think better.

The Philosophy: Enhance Human Cognition, Don't Replace It

Core Principles

  1. Augment, Don't Automate - The goal is smarter humans, not smarter AI
  2. Address Failures, Not Prescribe Success - Fix what's broken, don't dictate what's right
  3. Maintain Human Agency - Every decision remains yours
  4. Compound Learning - Each interaction makes both human and AI better
  5. Radical Transparency - See exactly how and why the system works

Why This Matters Now

We're at a critical juncture where AI can either replace human thinking or enhance it. Greenlights AI firmly chooses enhancement. Your cognitive failures are universal and documented. The fixes come from the thinkers YOU choose. The system helps you think better, not think for you.

The Recursive Magic

When you enhance your cognition through AI, you create better outputs. Those better outputs train the AI to higher standards. The improved AI helps you think even better. It's a virtuous cycle of cognitive enhancement that compounds over time.

Your books are just the beginning. The real goldmine is your enhanced cognition.

Credits & Acknowledgments

  • Mike Bailey - The absolute MVP who built VoiceMode MCP and made natural voice conversations with Claude possible. Without Mike, we'd still be typing.
  • Anthropic - For Claude and the Model Context Protocol
  • OpenAI - For Whisper (speech-to-text that actually works)
  • Kokoro Team - For the amazing open source TTS model
  • Matthew McConaughey - For articulating the vision on JRE

Contributing

Share your cognitive architecture extractions! Submit PRs with:

  • New thinkers/books you've extracted
  • Improved extraction methodologies
  • Integration guides for other LLMs
  • Success stories and use cases

License

MIT - Because good thinking patterns should be free.


Stop letting cognitive failures control your life. Start catching them before they catch you.

Greenlights AI: Where human cognition meets AI enhancement.

An open source cognitive enhancement system from the AI community
