This project is a technical demonstration developed specifically for the Agents League (Creative Apps) competition.
- Purpose: This repository is an academic and competitive submission. It is intended to showcase architectural patterns using the GitHub Copilot SDK and MCP (Model Context Protocol).
- Non-Commercial: This is a non-commercial prototype. No services are being sold, and no unauthorized employment is being conducted through this project.
- License: Open-sourced under the MIT License for community educational purposes.
Creative Theme AI Generator is a multi-surface design assistant that helps users generate and refine UI color palettes using chat, then preview the result across realistic Website, Web App, Desktop App, and Mobile App mockups.
Flow:

```
Frontend -> Copilot Orchestrator (cloud reasoning) -> MCP tools (local/server) -> LLM provider (Gemini/Ollama/OpenAI)
```
Creative Apps (GitHub Copilot)

- Serguei Castillo (@serguei9090) – Developer Collaborator. Participating strictly for educational/learning purposes; explicitly listed as ineligible to receive any cash prizes or hackathon compensation.
- Cinthya Rodriguez (@crhdez)
Theme AI Generator is a creative productivity application for designers and developers who need fast palette exploration with practical UI context. Users chat with an AI assistant to generate a five-color system (primary, secondary, accent, background, text), then immediately preview the palette on enterprise-grade mock interfaces across website, analytics web app, desktop workspace, and mobile patterns.
The app is designed for quick experimentation and iterative refinement. Users can edit a single color manually (advanced picker + RGB/hex), regenerate all colors from mood prompts, or target one color using natural-language instructions. The backend is a local MCP-style server that routes generation to multiple providers (Ollama, OpenAI, Gemini, Copilot SDK) with deterministic fallback behavior for resilience.
The project focuses on creative speed, consistent output contracts, and practical presentation quality. Rather than showing abstract swatches alone, it demonstrates how a palette behaves in real UI structures (navigation, cards, tables, pricing blocks, activity panels, editor panes, and mobile cards). This makes the output easier to evaluate for readability, hierarchy, and brand tone.
- Demo Video: YouTube Link
- Visual Gallery: See our Mockup & UI Gallery
- Copilot Evidence: Read the Prompt Highlights (Curated) or the Full Log.
For quick testing or to containerize the application, you can use Docker or our "mini" environment configuration.
If you have bun installed and ollama running, start the app with one command:

```bash
bun install && cp .env.example.mini .env && cp .env apps/web/.env.local && bun run dev
```

To test the project in a clean environment using Docker:

- Build the image:

```bash
docker build -t theme-ai-web .
```

- Run the container:

```bash
docker run -p 3000:3000 --env-file .env.example.mini theme-ai-web
```

Note: If connecting to a local Ollama instance on your host, ensure `OLLAMA_URL` is set to `http://host.docker.internal:11434` in your `.env`.
The repository is built as a Monorepo. Environment variables are managed at the root and shared with the web application.
```bash
bun install

# Initialize environment from template
cp .env.example .env

# Sync to web app (required for Next.js/Node.js)
cp .env apps/web/.env.local
```

If you want the full graphical interface for generating and previewing palettes:

```bash
bun run dev
```

Open http://localhost:3000.
If you want to plug the color generator directly into your AI IDE (Copilot, Claude Desktop, etc.) without needing the web UI:
```bash
bun run mcp:dev
```

We used GitHub Copilot (VS Code Agent/Edit/Ask modes) for approximately 70% of the implementation, then continued manually after hitting weekly quota limits.
The table below highlights representative Copilot prompts and their impact:
| Area | Copilot Prompt / Action | Result / Impact |
|---|---|---|
| MCP routing | Refactor provider routing with fallback and strict palette validation | Reduced provider-specific branching bugs |
| UI overhaul | Create enterprise preview components with reusable shadcn primitives | Faster multi-file UI delivery |
| LLM integration | Replace aistudio direct fetch with official `@google/genai` SDK | Improved reliability and token handling |
| Orchestration | Implement dual-brain workflow with Copilot SDK as creative director | High-quality reasoning vs. precise tool execution |
We have deeply integrated the GitHub Copilot SDK as the primary "Creative Director" of the system.
- Intent Discovery: Before generating colors, the system uses the Copilot SDK to reason about the user's project and propose three distinct stylistic directions.
- Agentic Orchestration: The SDK manages the high-level conversation state, while the low-level palette math is delegated to specialized MCP tools.
- Seamless Fallback: If the SDK is unavailable, the system transparently falls back to local Ollama or raw Gemini/OpenAI providers.
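As a rough sketch of that fallback behavior (names here are illustrative; the actual routing lives in `llmService.ts` and may differ), the router can walk an ordered provider list and return the first success:

```typescript
// Illustrative sketch of deterministic provider fallback, not the repo's
// actual llmService.ts implementation.
type Provider = "copilot" | "ollama" | "gemini" | "openai";

type Palette = {
  primary: string; secondary: string; accent: string;
  background: string; text: string;
};

type GenerateFn = (prompt: string) => Promise<Palette>;

export async function generateWithFallback(
  prompt: string,
  providers: Array<[Provider, GenerateFn]>,
): Promise<{ provider: Provider; palette: Palette }> {
  const errors: string[] = [];
  // Try providers in a fixed order so failures degrade predictably.
  for (const [name, generate] of providers) {
    try {
      return { provider: name, palette: await generate(prompt) };
    } catch (err) {
      errors.push(`${name}: ${(err as Error).message}`);
    }
  }
  throw new Error(`All providers failed:\n${errors.join("\n")}`);
}
```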
The system doesn't just pass text to an LLM; it implements a robust reasoning loop to ensure technical quality and accessibility.
- Creative Intent: The user asks for a "light neon palette."
- LLM Generation: The model proposes a bright yellow background with white text (low contrast).
- Validation Step: The `enforcePaletteAccessibility` logic calculates the contrast ratio between `text` and `background`.
- Loop/Correction: If the ratio is below 4.5:1 (WCAG AA), the system automatically triggers a correction, selecting the most readable text color (dark navy or white) to preserve the aesthetic while ensuring usability.
- Evidence: The user receives a notification: "Applied readability adjustment for text contrast."
This allows the "Creative Director" (Copilot) to focus on the mood, while the "Technical Engine" (MCP) ensures the design is actually functional.
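For reference, the WCAG 2.x contrast math behind that check can be sketched as follows (illustrative only; the repo's `enforcePaletteAccessibility` may be implemented differently):

```typescript
// Relative luminance per WCAG 2.x, from a "#rrggbb" hex string.
function luminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    // sRGB gamma expansion
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio = (L_lighter + 0.05) / (L_darker + 0.05).
function contrastRatio(a: string, b: string): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// If text vs. background falls below 4.5:1 (WCAG AA), pick the more
// readable of dark navy or white, mirroring the correction loop above.
function correctTextColor(background: string, text: string): string {
  if (contrastRatio(text, background) >= 4.5) return text;
  const candidates = ["#0a1a2f", "#ffffff"]; // dark navy, white (illustrative values)
  return candidates.sort(
    (a, b) => contrastRatio(b, background) - contrastRatio(a, background),
  )[0];
}
```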
- Multi-provider LLM routing with deterministic fallback in `llmService.ts`
- Official Gemini SDK integration for high-performance generation
- Dual-brain orchestration: Copilot SDK for discovery vs. LLM Service for execution
- Strict palette output contract and lowercase hex normalization
- Centralized HTTP error classification and status mapping in `httpErrors.ts`
- Deployment-safe web API routes under `apps/web/src/app/api/mcp/*`
- Enterprise-grade preview system with shared mockup primitives
- Challenge: provider failures produced mixed error semantics.
  Learning: map errors centrally and separate input, upstream, timeout, and server failure classes.
- Challenge: palette demos were visually simple and not judge-friendly.
  Learning: realistic product surfaces communicate color quality better than isolated swatches.
- Challenge: localhost assumptions break remote demos.
  Learning: default to a same-origin proxy for backend access; allow override with an env variable.
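A minimal sketch of the centralized classification idea (class names are illustrative; the repo's `httpErrors.ts` may classify differently):

```typescript
// Map raw HTTP outcomes into the four failure classes named above.
type FailureClass = "input" | "upstream" | "timeout" | "server";

export function classifyHttpError(status: number, timedOut = false): FailureClass {
  if (timedOut || status === 408 || status === 504) return "timeout";
  if (status === 429) return "upstream"; // provider rate limit, surfaced to the user
  if (status >= 400 && status < 500) return "input"; // bad request from the caller
  if (status === 502 || status === 503) return "upstream"; // provider outage
  return "server"; // unexpected internal failure
}
```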
- Ephemeral chat by default (no automatic history persistence)
- Explicit `Save` and `Restore` session actions via browser storage
- Palette generation from mood/theme prompt
- Per-color editing:
  - manual advanced control (hex + rgb + color picker)
  - prompt-based single-color update
  - full palette regeneration
- Live theme application across:
  - website
  - web app
  - desktop app
  - mobile app
- Provider routing with deterministic fallback:
  - `ollama`
  - `openai`
  - `gemini` (official SDK)
  - `copilot` (via `@github/copilot-sdk`)
This project is structured as a monorepo to safely share AI logic while maintaining independent execution environments:
- `apps/web`: Next.js UI (chat, controls, previews, web API routes). It uses the shared core for fast, serverless-friendly LLM generation without requiring a persistent separate MCP process.
- `packages/mcp-server`: The native Model Context Protocol (MCP) server. Used for integrating the palette generator natively into AI assistants like Claude Desktop or IDEs.
- `packages/core`: Shared models, LLM routing logic, prompts, and deterministic fallback functions used by both the Web App and the MCP Server.
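As a rough illustration of the sharing pattern (the package and export names below are hypothetical, not the repo's actual API), both surfaces can delegate to the same core function:

```typescript
// Hypothetical import: the shared package's real name and exports may differ.
import { generatePalette } from "@theme-ai/core";

// A Next.js route handler in apps/web and the MCP tool handler in
// packages/mcp-server can both call this single implementation.
const palette = await generatePalette({ prompt: "warm editorial", provider: "ollama" });
```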
The Native MCP server allows you to generate color systems directly inside your AI Assistant (Claude, VS Code, etc.) without needing the web UI.
If you exclusively want to use the MCP features:
- Clone & Install: `git clone ... && bun install`
- Configure: `cp packages/mcp-server/.env.example packages/mcp-server/.env`
- Run: `bun run mcp:dev` (starts the server on port `41234`)
- Verify: `bun run test:manual-mcp` (runs a diagnostic generation test)
There are three ways to run this server depending on your environment:
Best if you are actively editing the code or using a local clone.
```json
{
  "mcpServers": {
    "theme-ai": {
      "command": "bun",
      "args": ["run", "C:/path/to/theme-ai-generator/packages/mcp-server/src/server.ts"],
      "env": { "DEFAULT_PROVIDER": "ollama" }
    }
  }
}
```

You can compile the server into a single standalone executable.
- Build it: `bun run compile:mcp` (generates `mcp-server.exe` on Windows)
- Use it:
```json
{
  "mcpServers": {
    "theme-ai": {
      "command": "C:/path/to/theme-ai-generator/mcp-server.exe",
      "env": { "DEFAULT_PROVIDER": "ollama" }
    }
  }
}
```

If the project is public, you can reach the logic via the published link. Note: Due to our monorepo structure, we recommend installing the package once via bun to cache dependencies.
```bash
bun install -g github:serguei9090/theme-ai-generator/packages/mcp-server
```

Then use:
```json
{
  "mcpServers": {
    "theme-ai": {
      "command": "theme-ai-mcp",
      "env": { "DEFAULT_PROVIDER": "ollama" }
    }
  }
}
```

Add this to your custom MCP settings for a tailored local experience:
```json
{
  "mcpServers": {
    "theme-ai": {
      "command": "bun",
      "args": ["run", "C:/path/to/theme-ai-generator/packages/mcp-server/src/server.ts"],
      "env": {
        "DEFAULT_PROVIDER": "ollama",
        "OLLAMA_URL": "http://localhost:11434"
      }
    }
  }
}
```

Note: You can pass environment variables directly in the JSON config to override your `.env` file.
Whether running via the Web API (`apps/web/...`) or via MCP tools (`packages/mcp-server/...`), the system exposes the same capabilities:
- `generate_theme_palette` (returns palette JSON)
- `tweak_color` (returns tweaked palette JSON)
- `discover_theme_styles` (drafts 3 creative directions)
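For reference, a raw MCP tool invocation is a JSON-RPC 2.0 `tools/call` request. A sketch of the payload follows; the `prompt` argument name is hypothetical, so check the tool's actual input schema:

```typescript
// JSON-RPC 2.0 request body for invoking an MCP tool by name.
// The "prompt" argument is illustrative; the real input schema may differ.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "generate_theme_palette",
    arguments: { prompt: "dark cyberpunk dashboard" },
  },
};
```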
The standard output format for a palette object across the system is normalized:
```json
{
  "primary": "#rrggbb",
  "secondary": "#rrggbb",
  "accent": "#rrggbb",
  "background": "#rrggbb",
  "text": "#rrggbb"
}
```

- All provider keys (`GEMINI_API_KEY`, `OPENAI_API_KEY`, etc.) should be placed in `.env` at the root or `apps/web/.env.local`.
- Default provider and models can be configured globally.
- `NEXT_PUBLIC_MCP_URL` (optional):
  - If set, the frontend calls this URL directly.
  - If not set, the frontend uses internal API routes (`/api/mcp/*`).
Run these from the root of the monorepo:
- Web App: `bun run dev` (hot reloading on port 3000)
- MCP Server: `bun run mcp:dev` (watch mode on port 41234)
- Lint: `bun run lint`
For performance-critical environments or final demos:
- Build & Optimise: `bun run build`
- Launch Production Web: `bun run start`
- Launch Production MCP: `bun run mcp`
- Typecheck: `bun run typecheck`
- Run All Tests: `bun run test`
- Manual MCP Diagnostic: `bun run test:manual-mcp` (generates a sample palette via the terminal)
If the MCP server fails to start because the address is in use:
- Identify the process: `netstat -ano | findstr :41234`
- Kill it: `taskkill /F /PID <PID_NUMBER>`
Ensure Ollama is running and accessible. Test it with:

```bash
curl http://localhost:11434/api/tags
```
If you get a connection error, Ollama is likely not started.
To ensure full connectivity and provider reliability, the following API connections have been verified:
- AI Studio (Gemini): Active and responding via official SDK.
- OpenAI: NOT TESTED (Requires active credits).
- Copilot: Authenticated and functioning as the primary Creative Director.
Note: The built-in error handler will notify you specifically if a rate limit (429) is hit, allowing you to confirm connection even if your quota is exhausted.
- Pre-commit hook via `lefthook`:
  - `bunx biome check --write {staged_files}`
  - `bun run typecheck`
- No hardcoded API keys in the repository
- `.env` files are excluded from git
- Demo assets contain no sensitive/customer data
- CORS restricted to explicit origins in production