Stacks Builder is an MCP-enabled RAG system that enhances Clarity smart contract coding for Cursor/VS Code. It ingests official Clarity docs and sample projects into ChromaDB, retrieves the most relevant context, and uses Gemini to produce accurate answers and code.
- 🔍 Smart Context Retrieval - Search through 15+ Clarity code samples and official documentation
- 🤖 AI Code Generation - Generate Clarity code with LLM assistance (Gemini/OpenAI/Claude)
- ⚡ RAG-Powered - Combines vector similarity search with intelligent code generation
- 🎯 IDE Integration - Works seamlessly with Cursor, Claude Desktop, and MCP-compatible editors
- 🔒 User Authentication - Secure API key management for multi-user environments
- 🌐 Production Ready - Hosted backend available at https://stacks-builder.q3labs.io
Get up and running in 3 minutes using our hosted backend.
Visit our Swagger UI to register and generate your API key:
- Open: https://stacks-builder.q3labs.io/swagger/index.html
- Register via the `/api/v1/auth/register` endpoint
- Log in via the `/api/v1/auth/login` endpoint
- Generate your API key via `POST /api/v1/keys` (Create API key)
- Save your API key - you'll need it in the next step
Add this configuration to your Cursor MCP settings file (~/.cursor/mcp.json):
```json
{
  "mcpServers": {
    "stacks-builder": {
      "command": "npx",
      "args": [
        "-y",
        "@q3labs/stacks-builder"
      ],
      "env": {
        "API_KEY": "your-api-key-here",
        "BACKEND_URL": "https://stacks-builder.q3labs.io"
      }
    }
  }
}
```

Replace `your-api-key-here` with the API key from Step 1.
Go to Cursor Settings → Rules, Memories, Commands and create new User Rules so they apply across all your projects:
```text
Stacks Coding Rules

These rules define how Stacks and Clarity development should be handled across projects. All Stacks- or Clarity-related requests must begin with a call to the Stacks Builder MCP.

Use the MCP tools in this order:
1. get_clarity_context – retrieve context, analyze existing code, and understand behavior.
2. generate_clarity_code – create or modify code.

Each MCP query must be atomic and focus on a single topic. If a request involves multiple topics, make separate MCP calls.

All answers must rely on MCP output without assumptions or speculation.
```
If needed, completely restart Cursor (not just reload) for the changes to take effect.
Video demo showing the setup process from Step 1 to Step 3.
Once configured, you'll have access to:
- `get_clarity_context` - Retrieves relevant Clarity code snippets and documentation
  - Search through curated examples and official docs
  - Get contextual code samples for your queries
- `generate_clarity_code` - Generates complete Clarity code
  - AI-powered code generation using RAG context
  - Supports custom temperature and token limits
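As a rough illustration, a tool invocation ultimately reduces to a small JSON payload forwarded to the backend. The sketch below is hypothetical: the `build_tool_request` helper and the `top_k` parameter are assumptions for illustration, not part of the published API.

```python
import json

def build_tool_request(tool: str, query: str, **params) -> str:
    """Serialize a hypothetical MCP tool invocation into a JSON payload.

    The envelope shape ("tool"/"arguments") is illustrative only.
    """
    payload = {"tool": tool, "arguments": {"query": query, **params}}
    return json.dumps(payload)

# Example: ask for context on Clarity maps, requesting 3 snippets.
req = build_tool_request(
    "get_clarity_context",
    "How do I define a map in Clarity?",
    top_k=3,
)
```

The tool names match the two tools listed above; everything else is a sketch.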
Example simple queries:
- "How do I create a data variable in Clarity?"
- "Show me examples of using maps in Clarity"
Example advanced queries:
```text
Create a Clarity project using the Clarinet tool to build the following: Create a token with name CQT. It should support transfer and mint (only the owner can mint).

Rules:
- Must meet all requirements above
- Handle any possible errors
- Adhere to the Clarinet/Stacks documentation for best practice/syntax
- Can apply the changes in this current project

Use the stacks-builder MCP server to help you with development
```
If tools don't appear after restarting Cursor, try global installation:

```shell
npm install -g @q3labs/stacks-builder
```

Update the config to use the global command:
```json
{
  "mcpServers": {
    "stacks-builder": {
      "command": "stacks-builder",
      "args": [],
      "env": {
        "API_KEY": "your-api-key-here",
        "BACKEND_URL": "https://stacks-builder.q3labs.io"
      }
    }
  }
}
```

The MCP server requires Node.js 22+. Check your version:

```shell
node --version
```

```text
┌─────────────┐
│   Cursor    │  Your IDE with MCP support
└──────┬──────┘
       │ MCP Protocol
┌──────▼───────────────┐
│@q3labs/stacks-builder│  MCP Server
└──────┬───────────────┘
       │ HTTPS/REST API
┌──────▼──────────────┐
│  Backend Server     │  Go API + Python RAG Pipeline
│  ChromaDB Store     │  Vector embeddings + LLM
└─────────────────────┘
```
Workflow:
- Query - You ask a question about Clarity in Cursor
- Context Retrieval - MCP server searches ChromaDB for relevant code samples
- LLM Generation - Retrieved context is combined with your prompt via LLM
- Response - Smart, context-aware code suggestions returned to your IDE
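The four workflow steps above can be sketched as a tiny retrieve-then-generate loop. This is illustrative only: the corpus, keyword scoring, and prompt template are made up for the sketch and are not the project's actual pipeline (which uses vector embeddings in ChromaDB).

```python
# Toy corpus standing in for the ChromaDB collection.
CORPUS = {
    "counter.clar": "(define-data-var count uint u0)",
    "token.clar": "(define-fungible-token cqt)",
}

def retrieve(query: str, k: int = 1):
    """Rank snippets by naive keyword overlap (stand-in for vector search)."""
    words = set(query.lower().split())
    def score(snippet: str) -> int:
        tokens = snippet.lower().replace("(", " ").replace(")", " ").split()
        return len(words & set(tokens))
    ranked = sorted(CORPUS.values(), key=score, reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Combine retrieved context with the user query for the LLM."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("define-data-var count example")
```

The real system replaces `retrieve` with embedding similarity search and sends the assembled prompt to the configured LLM provider.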
Stacks Builder is built around a Model Context Protocol (MCP) server that streams Clarity-specific context directly into your IDE. The service:
- Serves Clarity knowledge over MCP protocol
- Retrieves embeddings from ChromaDB populated with documentation and sample projects
- Orchestrates LLM providers (Gemini/OpenAI/Claude) with retrieved snippets for smart code generation
Want to run your own backend? Follow these instructions to set up the full stack locally.
- Node.js 22+ - MCP server (for production usage via npm)
- Go 1.24+ - Backend API server (for local setup)
- Python 3.11+ - RAG pipeline and embedding generation
- Docker & Docker Compose - Containerized deployment (recommended)
- Make - Build automation
You'll need at least one LLM provider API key:
- Google Gemini (recommended)
- OpenAI (alternative)
- Claude (alternative)
- ~10GB of free storage for the full dataset and embeddings
Clone the repository and set up the backend:
```shell
git clone https://github.com/Quantum3-Labs/stacks-builder.git
cd stacks-builder/backend
```

Create your environment file:

```shell
cp .env.example .env
```

Edit `.env` and configure:
- Your LLM API key (Gemini/OpenAI/Claude - choose one)
- Database settings
- `PUBLIC_BACKEND_URL` (use `http://localhost:8080` for local)

Important: Only set one LLM provider and its key at a time.

Start the backend:

```shell
make up
```

The backend will be available at http://localhost:8080.
To populate the RAG system with Clarity documentation and code samples, you need to ingest the data into ChromaDB. This step is essential for the system to provide relevant context.
Setup virtual environment and ingest data:
```shell
# Navigate to backend directory
cd backend

# Create Python virtual environment
make venv

# Activate the virtual environment
source venv/bin/activate

# Install all dependencies (Python + Go)
make install

# Clone repositories and ingest data into ChromaDB
make setup
```

Optional: Add custom repositories
If you want to include additional Clarity repositories in your knowledge base, you can add them to the repository list before running setup:
- Edit `backend/scripts/clone_repos.py`
- Add your repository URLs to the `REPO_URLS` list:

```python
REPO_URLS = [
    # ... existing repositories ...
    "https://github.com/your-username/your-clarity-repo.git",
    "https://github.com/another-org/clarity-project.git",
]
```

- Then run the setup process as normal with `make setup`
This process will:
- Create a Python virtual environment
- Install Python dependencies for data processing
- Install Go dependencies for the backend
- Clone official Clarity documentation and 15+ sample repositories
- Process and embed all content into ChromaDB for vector search
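Ingestion pipelines like this typically split source files into overlapping chunks before embedding. A simplified chunker might look like the following; the chunk size and overlap are illustrative values, not the settings used by `make setup`:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50):
    """Split text into overlapping character windows for embedding.

    Overlap keeps context that straddles a chunk boundary retrievable.
    """
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks

# A repeated Clarity snippet standing in for a real .clar source file.
sample = "(define-public (increment) (ok (var-set count (+ (var-get count) u1))))" * 5
chunks = chunk_text(sample)
```

Real pipelines often chunk on syntactic boundaries (functions, sections) rather than raw characters; this sketch shows only the windowing idea.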
Once the backend is running:
- Open: http://localhost:8080/swagger/index.html
- Register via `/api/v1/auth/register`
- Log in via `/api/v1/auth/login`
- Generate your API key from `/api/v1/keys`
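If you prefer scripting this flow, the endpoints can be called from Python. The sketch below only builds the requests without sending them, and the field names (`email`, `password`) are assumptions; check the Swagger UI for the exact request schemas.

```python
import json
import urllib.request

BASE = "http://localhost:8080"

def json_request(path: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) a JSON POST request to the local backend."""
    return urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical field names -- verify against the Swagger schemas.
register = json_request("/api/v1/auth/register",
                        {"email": "dev@example.com", "password": "s3cret"})
login = json_request("/api/v1/auth/login",
                     {"email": "dev@example.com", "password": "s3cret"})
# Send with urllib.request.urlopen(register) once the backend is running.
```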
Update your `~/.cursor/mcp.json` to point to your local backend:

```json
{
  "mcpServers": {
    "stacks-builder": {
      "command": "npx",
      "args": ["-y", "@q3labs/stacks-builder"],
      "env": {
        "API_KEY": "your-api-key-here",
        "BACKEND_URL": "http://localhost:8080"
      }
    }
  }
}
```

Go to Cursor Settings → Rules, Memories, Commands and create new User Rules so they apply across all your projects:
```text
Stacks Coding Rules

These rules define how Stacks and Clarity development should be handled across projects. All Stacks- or Clarity-related requests must begin with a call to the Stacks Builder MCP.

Use the MCP tools in this order:
1. get_clarity_context – retrieve context, analyze existing code, and understand behavior.
2. generate_clarity_code – create or modify code.

Each MCP query must be atomic and focus on a single topic. If a request involves multiple topics, make separate MCP calls.

All answers must rely on MCP output without assumptions or speculation.
```
Restart Cursor completely.
For active development with live reload:
```shell
cd backend

# Use development environment
cp .env.dev.example .env.dev

# Edit .env.dev and add your API keys

# Start with live reload (uses Air)
make dev

# View logs
make dev-logs

# Stop
make dev-down
```

Development features:
- Automatic rebuild on code changes using Air
- Debug mode with verbose logging
- Source code mounted as volume for instant changes
- Swagger docs auto-generated on every build
See backend/Makefile for all development commands (make dev-*).
For MCP server development:
```shell
cd mcp_server
npm install
npm run build
```

Update `~/.cursor/mcp.json` to use local files:

```json
{
  "mcpServers": {
    "stacks-builder": {
      "command": "node",
      "args": ["/absolute/path/to/stacks-builder/mcp_server/dist/index.js"],
      "env": {
        "API_KEY": "your-api-key-here",
        "BACKEND_URL": "http://localhost:8080"
      }
    }
  }
}
```

You can also use the backend directly via REST API:
```shell
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "messages": [
      {"role": "user", "content": "How do I write a counter contract in Clarity?"}
    ]
  }'
```

With optional parameters:

```shell
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "model": "gemini-2.5-flash",
    "messages": [
      {"role": "user", "content": "How do I write a counter contract in Clarity?"}
    ],
    "temperature": 0.7,
    "max_tokens": 2000,
    "conversation_id": 123
  }'
```

The backend uses SQLite for user management, API keys, job tracking, and query analytics.
- Default: `./data/clarity_coder.db`
- Production/Docker: `/app/data/clarity_coder.db`
- Local Development: `./data/clarity_coder.db`
Configure in your `.env` file:

```shell
DATABASE_PATH=./data/clarity_coder.db
```

The database file is automatically created on first run if it doesn't exist. All database tables and indices are created via automatic migrations.
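This first-run behavior can be demonstrated with SQLite directly: connecting creates the file, and `CREATE TABLE IF NOT EXISTS` acts like a minimal idempotent migration. The schema below is a trimmed illustration, not the backend's real one:

```python
import os
import sqlite3
import tempfile

# Point at a path that does not exist yet.
db_path = os.path.join(tempfile.mkdtemp(), "clarity_coder.db")

conn = sqlite3.connect(db_path)  # SQLite creates the file if it's missing
conn.execute(
    """CREATE TABLE IF NOT EXISTS users (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        email TEXT UNIQUE NOT NULL
    )"""
)
conn.commit()  # re-running this script is harmless: IF NOT EXISTS is a no-op
```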
The query_logs table tracks all API requests for analytics, debugging, and token usage monitoring.
| Column | Type | Description |
|---|---|---|
| `id` | `INTEGER` | Primary key, auto-increment |
| `user_id` | `INTEGER` | Foreign key to users table (required) |
| `api_key_id` | `INTEGER` | Foreign key to api_keys table (nullable) |
| `endpoint` | `TEXT` | API endpoint path (e.g., `/v1/chat/completions`) |
| `query` | `TEXT` | Request payload (truncated to 10KB) |
| `response` | `TEXT` | Response payload (truncated to 10KB) |
| `model_provider` | `TEXT` | LLM provider used (gemini, openai, claude) |
| `rag_contexts_count` | `INTEGER` | Number of RAG contexts retrieved (default: 0) |
| `input_tokens` | `INTEGER` | Tokens in prompt/input (default: 0) |
| `output_tokens` | `INTEGER` | Tokens in completion/output (default: 0) |
| `latency_ms` | `INTEGER` | Request latency in milliseconds (default: 0) |
| `status` | `TEXT` | Request status (success or error) |
| `error_message` | `TEXT` | Error details if status is error (nullable) |
| `conversation_id` | `INTEGER` | Foreign key to conversations table (nullable) |
| `created_at` | `TIMESTAMP` | Record creation timestamp (default: CURRENT_TIMESTAMP) |
Indices:
- `idx_query_logs_user_id` - Index on user_id for faster user-specific queries
- `idx_query_logs_created_at` - Index on created_at for time-based queries
- `idx_query_logs_endpoint` - Index on endpoint for endpoint-specific analytics
Token Counting:
Token counts (`input_tokens`, `output_tokens`) are populated using native token counting APIs from each LLM provider:
- Gemini: Uses the `CountTokens()` method from the Google GenAI SDK
- OpenAI: Extracted from the response `usage.prompt_tokens` and `usage.completion_tokens`
- Claude: Extracted from the response `usage.input_tokens` and `usage.output_tokens`
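Putting the truncation and token fields together, a logging write might look like the sketch below. It uses an in-memory database and a cut-down schema for illustration; the backend's actual implementation is in Go.

```python
import sqlite3

MAX_PAYLOAD = 10 * 1024  # payloads are truncated to 10KB before logging

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE query_logs (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        query TEXT,
        response TEXT,
        input_tokens INTEGER DEFAULT 0,
        output_tokens INTEGER DEFAULT 0,
        status TEXT
    )"""
)

def log_query(query, response, input_tokens, output_tokens, status="success"):
    """Record one request, truncating oversized payloads to 10KB."""
    conn.execute(
        "INSERT INTO query_logs (query, response, input_tokens, output_tokens, status) "
        "VALUES (?, ?, ?, ?, ?)",
        (query[:MAX_PAYLOAD], response[:MAX_PAYLOAD],
         input_tokens, output_tokens, status),
    )
    conn.commit()

# A 20,000-character query is stored as only its first 10,240 characters.
log_query("q" * 20_000, "short answer", 120, 45)
```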
Stacks Builder can be integrated with various Clarity development tools and templates to enhance your smart contract development workflow with RAG-powered context.
- High-Level Architecture: ARCHITECTURE_DIAGRAM.md
- RAG Pipeline Details: RAG_PIPELINE_DIAGRAM.md
- RAG Approach: RAG_APPROACH_DIAGRAM.md
- API Documentation: https://stacks-builder.q3labs.io/swagger/index.html
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - see LICENSE file for details.
Built with ❤️ by Quantum3 Labs for the Stacks blockchain ecosystem.