Stacks Builder

Stacks Builder is an MCP-enabled RAG system that enhances Clarity smart contract coding for Cursor/VS Code. It ingests official Clarity docs and sample projects into ChromaDB, retrieves the most relevant context, and uses Gemini to produce accurate answers and code.

✨ Features

  • 🔍 Smart Context Retrieval - Search through 15+ Clarity code samples and official documentation
  • 🤖 AI Code Generation - Generate Clarity code with LLM assistance (Gemini/OpenAI/Claude)
  • RAG-Powered - Combines vector similarity search with intelligent code generation
  • 🎯 IDE Integration - Works seamlessly with Cursor, Claude Desktop, and MCP-compatible editors
  • 🔒 User Authentication - Secure API key management for multi-user environments
  • 🌐 Production Ready - Hosted backend available at https://stacks-builder.q3labs.io

🚀 Quick Start (Production)

Get up and running in 3 minutes using our hosted backend.

Step 1: Get an API Key

Visit our Swagger UI to register and generate your API key:

  1. Open: https://stacks-builder.q3labs.io/swagger/index.html
  2. Register via /api/v1/auth/register endpoint
  3. Login via /api/v1/auth/login endpoint
  4. Generate your API key via the POST /api/v1/keys (Create API key) endpoint
  5. Save your API key - you'll need it in the next step
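If you prefer to script registration instead of clicking through the Swagger UI, the same endpoints can be called directly. The sketch below only builds the POST requests; the payload field names (username, password) are assumptions, so confirm the exact request schema in the Swagger UI before sending anything.

```python
import json
from urllib.request import Request

BASE_URL = "https://stacks-builder.q3labs.io"

def build_auth_request(path: str, payload: dict) -> Request:
    """Build (but do not send) a JSON POST request for an auth endpoint."""
    return Request(
        url=BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Field names below are assumptions -- check the Swagger UI for the real schema.
register = build_auth_request("/api/v1/auth/register",
                              {"username": "alice", "password": "s3cret"})
login = build_auth_request("/api/v1/auth/login",
                           {"username": "alice", "password": "s3cret"})
```

Send each request with `urllib.request.urlopen` (or any HTTP client), then call POST /api/v1/keys with the returned credentials to mint the API key.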

Step 2: Configure MCP Server in Cursor

Add this configuration to your Cursor MCP settings file (~/.cursor/mcp.json):

{
  "mcpServers": {
    "stacks-builder": {
      "command": "npx",
      "args": [
        "-y",
        "@q3labs/stacks-builder"
      ],
      "env": {
        "API_KEY": "your-api-key-here",
        "BACKEND_URL": "https://stacks-builder.q3labs.io"
      }
    }
  }
}

Replace your-api-key-here with the API key from Step 1.

Step 3: Add Stacks Development Rule

Go to Cursor Settings → Rules, Memories, Commands and create a new User Rule so it applies across all your projects:

Stacks Coding Rules

These rules define how Stacks and Clarity development should be handled across projects. All Stacks- or Clarity-related requests must begin with a call to the Stacks Builder MCP.

Use the MCP tools in this order:
1. get_clarity_context – retrieve context, analyze existing code, and understand behavior.
2. generate_clarity_code – create or modify code.

Each MCP query must be atomic and focus on a single topic. If a request involves multiple topics, make separate MCP calls.

All answers must rely on MCP output without assumptions or speculation.

Step 4: Restart Cursor and video demo

Completely restart Cursor (not just reload the window) for the changes to take effect.

A video demo shows the setup process from Step 1 to Step 3.

Available MCP Tools

Once configured, you'll have access to:

  1. get_clarity_context - Retrieves relevant Clarity code snippets and documentation

    • Search through curated examples and official docs
    • Get contextual code samples for your queries
  2. generate_clarity_code - Generates complete Clarity code

    • AI-powered code generation using RAG context
    • Supports custom temperature and token limits

Example simple queries:

  • "How do I create a data variable in Clarity?"
  • "Show me examples of using maps in Clarity"

Example advanced queries:

"Create a Clarity project using the Clarinet tool to build the following: a token named CQT that supports transfer and mint (only the owner can mint).

Rules:

  • Must meet all requirements above
  • Handle any possible errors
  • Adhere to the clarinet/stacks documentation for best practice/syntax
  • Can apply the changes in the current project. Use the stacks-builder MCP server to help you with development."

🔧 Troubleshooting

MCP Tools Not Showing

If tools don't appear after restarting Cursor, try global installation:

npm install -g @q3labs/stacks-builder

Update config to use the global command:

{
  "mcpServers": {
    "stacks-builder": {
      "command": "stacks-builder",
      "args": [],
      "env": {
        "API_KEY": "your-api-key-here",
        "BACKEND_URL": "https://stacks-builder.q3labs.io"
      }
    }
  }
}

Node.js Version

The MCP server requires Node.js 22+. Check your version:

node --version

💡 How It Works

┌─────────────┐
│   Cursor    │  Your IDE with MCP support
└──────┬──────┘
       │ MCP Protocol
┌──────▼───────────────┐
│@q3labs/stacks-builder│  MCP Server
└──────┬───────────────┘
       │ HTTPS/REST API
┌──────▼──────────────┐
│  Backend Server     │  Go API + Python RAG Pipeline
│  ChromaDB Store     │  Vector embeddings + LLM
└─────────────────────┘

Workflow:

  1. Query - You ask a question about Clarity in Cursor
  2. Context Retrieval - MCP server searches ChromaDB for relevant code samples
  3. LLM Generation - Retrieved context is combined with your prompt via LLM
  4. Response - Smart, context-aware code suggestions returned to your IDE
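Steps 2 and 3 of the workflow can be illustrated with a toy retriever. This is a deliberately simplified stand-in: the real pipeline ranks dense model embeddings stored in ChromaDB, not word counts, but the retrieve-then-prompt shape is the same.

```python
from collections import Counter
from math import sqrt

# Toy corpus standing in for the ChromaDB snippet store.
SNIPPETS = [
    "(define-data-var counter uint u0)",
    "(define-map balances principal uint)",
    "(define-public (transfer ...) ...)",
]

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' -- a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Step 2: rank stored snippets by similarity to the query."""
    q = embed(query)
    return sorted(SNIPPETS, key=lambda s: cosine(q, embed(s)), reverse=True)[:k]

# Step 3: splice the retrieved context into the LLM prompt.
context = retrieve("how do I use a counter data variable")[0]
prompt = f"Context:\n{context}\n\nQuestion: how do I use a counter data variable"
```

The LLM then answers from `prompt` rather than from its parametric memory alone, which is what makes the suggestions context-aware.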

RAG Pipeline

[Clarity_RAG pipeline diagram]

MCP Server Overview

Stacks Builder is built around a Model Context Protocol (MCP) server that streams Clarity-specific context directly into your IDE. The service:

  • Serves Clarity knowledge over MCP protocol
  • Retrieves embeddings from ChromaDB populated with documentation and sample projects
  • Orchestrates LLM providers (Gemini/OpenAI/Claude) with retrieved snippets for smart code generation

🛠️ Local Setup

Want to run your own backend? Follow these instructions to set up the full stack locally.

Prerequisites

Required Software

  • Node.js 22+ - MCP server (for production usage via npm)
  • Go 1.24+ - Backend API server (for local setup)
  • Python 3.11+ - RAG pipeline and embedding generation
  • Docker & Docker Compose - Containerized deployment (recommended)
  • Make - Build automation

API Keys

You'll need at least one LLM provider API key:

  • Google Gemini (recommended)
  • OpenAI (alternative)
  • Claude (alternative)

System Requirements

  • ~10GB of free storage for the full dataset and embeddings

Setup Steps

1. Backend Setup

Clone the repository and set up the backend:

git clone https://github.com/Quantum3-Labs/stacks-builder.git
cd stacks-builder/backend

Create your environment file:

cp .env.example .env

Edit .env and configure:

  • Your LLM API key (Gemini/OpenAI/Claude - choose one)
  • Database settings
  • PUBLIC_BACKEND_URL (use http://localhost:8080 for local)

Important: Only set one LLM provider and its key at a time.

Start the backend:

make up

The backend will be available at http://localhost:8080.

2. Data Ingestion Setup

To populate the RAG system with Clarity documentation and code samples, you need to ingest the data into ChromaDB. This step is essential for the system to provide relevant context.

Setup virtual environment and ingest data:

# Navigate to backend directory
cd backend

# Create Python virtual environment
make venv

# Activate the virtual environment
source venv/bin/activate

# Install all dependencies (Python + Go)
make install

# Clone repositories and ingest data into ChromaDB
make setup

Optional: Add custom repositories

If you want to include additional Clarity repositories in your knowledge base, you can add them to the repository list before running setup:

  1. Edit backend/scripts/clone_repos.py
  2. Add your repository URLs to the REPO_URLS list:
REPO_URLS = [
    # ... existing repositories ...
    "https://github.com/your-username/your-clarity-repo.git",
    "https://github.com/another-org/clarity-project.git",
]
  3. Run the setup process as normal with make setup

This process will:

  1. Create a Python virtual environment
  2. Install Python dependencies for data processing
  3. Install Go dependencies for the backend
  4. Clone official Clarity documentation and 15+ sample repositories
  5. Process and embed all content into ChromaDB for vector search
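Step 5 in the list above involves splitting files into pieces before embedding them. As an illustration only, here is a minimal fixed-size chunker with overlap; the sizes are arbitrary and are not the pipeline's actual settings.

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step back by `overlap` to preserve context
    return chunks

chunks = chunk_text("a" * 1200, size=500, overlap=50)
```

Overlap keeps a definition that straddles a chunk boundary retrievable from at least one chunk.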

3. Generate API Key

Once the backend is running:

  1. Open: http://localhost:8080/swagger/index.html
  2. Register via /api/v1/auth/register
  3. Login via /api/v1/auth/login
  4. Generate your API key from /api/v1/keys

4. Configure MCP Server

Update your ~/.cursor/mcp.json to point to your local backend:

{
  "mcpServers": {
    "stacks-builder": {
      "command": "npx",
      "args": ["-y", "@q3labs/stacks-builder"],
      "env": {
        "API_KEY": "your-api-key-here",
        "BACKEND_URL": "http://localhost:8080"
      }
    }
  }
}

5. Add Stacks Development Rule

Go to Cursor Settings → Rules, Memories, Commands and create a new User Rule so it applies across all your projects:

Stacks Coding Rules

These rules define how Stacks and Clarity development should be handled across projects. All Stacks- or Clarity-related requests must begin with a call to the Stacks Builder MCP.

Use the MCP tools in this order:
1. get_clarity_context – retrieve context, analyze existing code, and understand behavior.
2. generate_clarity_code – create or modify code.

Each MCP query must be atomic and focus on a single topic. If a request involves multiple topics, make separate MCP calls.

All answers must rely on MCP output without assumptions or speculation.

Restart Cursor completely.

Development Mode

For active development with live reload:

cd backend

# Use development environment
cp .env.dev.example .env.dev
# Edit .env.dev and add your API keys

# Start with live reload (uses Air)
make dev

# View logs
make dev-logs

# Stop
make dev-down

Development features:

  • Automatic rebuild on code changes using Air
  • Debug mode with verbose logging
  • Source code mounted as volume for instant changes
  • Swagger docs auto-generated on every build

See backend/Makefile for all development commands (make dev-*).

Using Local MCP Server Development Version

For MCP server development:

cd mcp_server
npm install
npm run build

Update ~/.cursor/mcp.json to use local files:

{
  "mcpServers": {
    "stacks-builder": {
      "command": "node",
      "args": ["/absolute/path/to/stacks-builder/mcp_server/dist/index.js"],
      "env": {
        "API_KEY": "your-api-key-here",
        "BACKEND_URL": "http://localhost:8080"
      }
    }
  }
}

📡 Optional Interfaces

Chat Completion API

You can also use the backend directly via REST API:

curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "messages": [
      {"role": "user", "content": "How do I write a counter contract in Clarity?"}
    ]
  }'

With optional parameters:

curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "model": "gemini-2.5-flash",
    "messages": [
      {"role": "user", "content": "How do I write a counter contract in Clarity?"}
    ],
    "temperature": 0.7,
    "max_tokens": 2000,
    "conversation_id": 123
  }'
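For programmatic access from Python, the same request can be assembled with the standard library. The sketch below only builds the request object; run it against a live backend with a real API key to get a response.

```python
import json
from urllib.request import Request

def chat_request(api_key: str, content: str, **params) -> Request:
    """Build a POST to /v1/chat/completions; params carries the optional
    fields shown above (temperature, max_tokens, conversation_id, model)."""
    payload = {"messages": [{"role": "user", "content": content}], **params}
    return Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )

req = chat_request("YOUR_API_KEY",
                   "How do I write a counter contract in Clarity?",
                   temperature=0.7, max_tokens=2000)
# To send: urllib.request.urlopen(req), with the backend running.
```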

🗄️ Database Configuration

The backend uses SQLite for user management, API keys, job tracking, and query analytics.

DATABASE_PATH Environment Variable

  • Default: ./data/clarity_coder.db
  • Production/Docker: /app/data/clarity_coder.db
  • Local Development: ./data/clarity_coder.db

Configure in your .env file:

DATABASE_PATH=./data/clarity_coder.db

The database file is automatically created on first run if it doesn't exist. All database tables and indices are created via automatic migrations.

Query Logs Schema

The query_logs table tracks all API requests for analytics, debugging, and token usage monitoring.

| Column | Type | Description |
| --- | --- | --- |
| id | INTEGER | Primary key, auto-increment |
| user_id | INTEGER | Foreign key to users table (required) |
| api_key_id | INTEGER | Foreign key to api_keys table (nullable) |
| endpoint | TEXT | API endpoint path (e.g., /v1/chat/completions) |
| query | TEXT | Request payload (truncated to 10KB) |
| response | TEXT | Response payload (truncated to 10KB) |
| model_provider | TEXT | LLM provider used (gemini, openai, claude) |
| rag_contexts_count | INTEGER | Number of RAG contexts retrieved (default: 0) |
| input_tokens | INTEGER | Tokens in prompt/input (default: 0) |
| output_tokens | INTEGER | Tokens in completion/output (default: 0) |
| latency_ms | INTEGER | Request latency in milliseconds (default: 0) |
| status | TEXT | Request status (success or error) |
| error_message | TEXT | Error details if status is error (nullable) |
| conversation_id | INTEGER | Foreign key to conversations table (nullable) |
| created_at | TIMESTAMP | Record creation timestamp (default: CURRENT_TIMESTAMP) |

Indices:

  • idx_query_logs_user_id - Index on user_id for faster user-specific queries
  • idx_query_logs_created_at - Index on created_at for time-based queries
  • idx_query_logs_endpoint - Index on endpoint for endpoint-specific analytics
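The documented columns and indices can be reproduced in a throwaway SQLite database for inspection. This sketch covers only a subset of the columns; the real tables are created by the backend's automatic migrations, not by hand.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # the backend writes to DATABASE_PATH instead
conn.execute("""
    CREATE TABLE IF NOT EXISTS query_logs (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        user_id INTEGER NOT NULL,
        endpoint TEXT,
        model_provider TEXT,
        input_tokens INTEGER DEFAULT 0,
        output_tokens INTEGER DEFAULT 0,
        latency_ms INTEGER DEFAULT 0,
        status TEXT,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "CREATE INDEX IF NOT EXISTS idx_query_logs_user_id ON query_logs(user_id)")
conn.execute(
    "INSERT INTO query_logs (user_id, endpoint, model_provider, status) "
    "VALUES (1, '/v1/chat/completions', 'gemini', 'success')")
row = conn.execute("SELECT endpoint, status FROM query_logs").fetchone()
```

The `idx_query_logs_user_id` index is what keeps per-user analytics queries fast as the log table grows.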

Token Counting:

Token counts (input_tokens, output_tokens) are populated using native token counting APIs from each LLM provider:

  • Gemini: Uses CountTokens() method from the Google GenAI SDK
  • OpenAI: Extracted from response usage.prompt_tokens and usage.completion_tokens
  • Claude: Extracted from response usage.input_tokens and usage.output_tokens

🔗 Integrations

Stacks Builder can be integrated with various Clarity development tools and templates to enhance your smart contract development workflow with RAG-powered context.

📚 Documentation

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📄 License

MIT License - see LICENSE file for details.


Built with ❤️ by Quantum3 Labs for the Stacks blockchain ecosystem.
