Replay reconstructs working UI from video recordings. Transform legacy software into production-ready React code with Design System and Component Library.
No manual documentation, no reverse-engineering. Upload a screen recording of any legacy app and Replay will:
- Reconstruct UI – AI analyzes video and generates pixel-perfect React code
- Extract Design System – colors, typography, and spacing tokens from the actual interface
- Build Component Library – Storybook-style docs with controls, variants, and usage examples
- Visualize Flows – see detected pages and navigation patterns
- One-Click Publish – deploy working UI to the web instantly
Replay uses a multi-model AI pipeline we call the "Sandwich Architecture", built from three stages: the Surveyor, the Generator, and the QA Tester.

**The Surveyor** – "Measure twice, cut once"
- Extracts precise layout measurements from video frames
- Detects grid systems, spacing patterns, color palettes
- Identifies navigation type (sidebar, top menu, tabs)
- Uses code execution for pixel-accurate measurements
- Outputs structured JSON with hard data, not guesses
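Replay does not publish the Surveyor's exact schema, but a hypothetical report for a sidebar-style dashboard might look like this (all field names are illustrative, not Replay's actual format):

```json
{
  "navigation": { "type": "sidebar", "widthPx": 240 },
  "grid": { "columns": 12, "gutterPx": 24 },
  "spacingScalePx": [4, 8, 16, 24, 32],
  "palette": {
    "background": "#0F172A",
    "surface": "#1E293B",
    "accent": "#38BDF8"
  },
  "typography": { "fontFamily": "Inter", "baseSizePx": 16, "scale": 1.25 }
}
```

The point of this hard-data handoff is that the Generator receives measurements to reproduce, not a description to interpret.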
**The Generator** – main code generation
- Receives Surveyor measurements as context
- Generates production-ready React + Tailwind code
- Preserves exact colors, typography, and layouts
- Creates interactive components with working navigation
- Outputs complete single-file React application
**The QA Tester** – visual verification
- Compares generated UI against original video frames
- Calculates SSIM (Structural Similarity Index)
- Identifies diff regions requiring fixes
- Provides auto-fix suggestions for mismatches
- Ensures pixel-perfect output
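SSIM itself is a standard image-comparison metric; a minimal global (single-window) version over two grayscale pixel arrays looks like the sketch below. Production implementations, likely including Replay's, compute it over sliding windows, but the formula is the same:

```typescript
// Global (single-window) SSIM for two equal-length grayscale arrays (0-255).
function ssim(a: number[], b: number[]): number {
  const n = a.length;
  const mean = (x: number[]) => x.reduce((s, v) => s + v, 0) / n;
  const muA = mean(a);
  const muB = mean(b);
  const variance = (x: number[], mu: number) =>
    x.reduce((s, v) => s + (v - mu) ** 2, 0) / n;
  const varA = variance(a, muA);
  const varB = variance(b, muB);
  let cov = 0;
  for (let i = 0; i < n; i++) cov += (a[i] - muA) * (b[i] - muB);
  cov /= n;
  // Standard stabilizing constants for 8-bit dynamic range (L = 255).
  const C1 = (0.01 * 255) ** 2;
  const C2 = (0.03 * 255) ** 2;
  return (
    ((2 * muA * muB + C1) * (2 * cov + C2)) /
    ((muA * muA + muB * muB + C1) * (varA + varB + C2))
  );
}

// Identical frames score 1; any pixel drift lowers the score.
const frame = [12, 40, 200, 33, 90, 118];
ssim(frame, frame); // 1
```

Regions where the score drops below a threshold are exactly the "diff regions" the QA Tester flags for auto-fix.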
- AI Agent Indexing – added `llms.txt` and `llms-full.txt` for AI assistants (ChatGPT, Claude, Perplexity, Gemini) to read Replay's complete product documentation in a single file.
- Permissive Crawling – `robots.txt` allows all major AI crawlers (GPTBot, ClaudeBot, PerplexityBot, GoogleOther, Amazonbot).
- AI-Native Metadata – title, description, and keywords optimized for AI recommendation engines.
- Infinite Loop Fix – all generation prompts enforce seamless marquee loops with duplicated items + `translateX(-50%)`. No more visible gaps or restarts in scrolling text.
- Truncation Detection – the AI editor detects and rejects truncated Tailwind class names (`flex-col` → `fle`, `max-w-[1400px]` → `ma[1400px]`). Corrupted edits preserve the original code.
- Alpine.js Protection – editor prompts forbid removing Alpine.js directives (`x-data`, `x-show`, `x-collapse`, `@click`) during edits.
- Outline Text Readability – `text-stroke`/`text-outline` requires a minimum of `opacity-60`.
- Hero Containment – hero headlines enforce `overflow-hidden` + `max-w-full` + responsive sizing.
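The duplicated-items + `translateX(-50%)` trick works because moving the track by exactly half its width lands on the second, pixel-identical copy of the content, so the animation can restart invisibly. A minimal illustration in plain CSS (class names are hypothetical, not Replay's generated markup):

```html
<div class="marquee">
  <div class="marquee-track">
    <span>Item A</span><span>Item B</span><span>Item C</span>
    <!-- exact duplicate of the items above -->
    <span>Item A</span><span>Item B</span><span>Item C</span>
  </div>
</div>

<style>
  .marquee { overflow: hidden; }
  .marquee-track {
    display: inline-flex;
    white-space: nowrap;
    animation: scroll 12s linear infinite;
  }
  /* -50% of the track equals one full copy of the items,
     so the loop's end frame matches its start frame exactly. */
  @keyframes scroll {
    from { transform: translateX(0); }
    to   { transform: translateX(-50%); }
  }
</style>
```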
| Feature | Replay | Lovable | Bolt.new | v0 (Vercel) | Builder.io |
|---|---|---|---|---|---|
| Input | Video recording | Text prompt | Text prompt | Text/image prompt | Figma/screenshot |
| Captures interactions | Yes (hover, click, scroll) | No | No | No | No |
| Captures animations | Yes (transitions, parallax) | No | No | No | No |
| Multi-page detection | Yes (auto from video) | No | No | No | No |
| Design System extraction | Yes (colors, fonts, spacing) | No | No | No | Partial |
| Component Library | Yes (5-layer taxonomy) | No | No | No | No |
| Accuracy to original | ~90% (pixel-level) | ~30% | ~30% | ~40% | ~50% |
| Output | React + Tailwind + GSAP | React | Multi-framework | React | Multi-framework |
Why video beats text prompts: text prompts require you to describe a UI. Video lets AI observe the real thing – layout, colors, typography, interactions, animations, and content. No prompt engineering needed.
A Storybook-like interface for your extracted components:
- Controls – edit props in real time (colors, text, sizes)
- Actions – see interactive behaviors
- Visual Tests – compare component states
- Accessibility – WCAG compliance checks
- Usage – copy-paste code snippets
Visual canvas for component composition:
- Drag & drop components on canvas
- Resize and position freely
- AI-powered editing: "Make it red", "Add icon", "Add shadow"
- Real-time preview in iframe
- Save to library when satisfied
Interactive visualization of app structure:
- Detected pages and navigation paths
- Click nodes to preview pages
- See relationships between screens
- Path Structure showing components per page
- Export as documentation
Connect Supabase and generate real data-fetching code:
- AI reads your table schemas
- Generates actual queries (not mock data)
- Supports authentication patterns
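Replay's generator is a black box, but the core idea of turning a table schema into a real query instead of mock data can be sketched as a pure function. The schema shape and helper below are illustrative, not Replay's internals:

```typescript
// Illustrative schema shape; Replay's internal representation is not public.
interface TableSchema {
  table: string;
  columns: { name: string; type: string }[];
}

// Emit a supabase-js data-fetching snippet for one table.
function emitFetchSnippet(schema: TableSchema): string {
  const cols = schema.columns.map((c) => c.name).join(", ");
  return [
    "const { data, error } = await supabase",
    `  .from("${schema.table}")`,
    `  .select("${cols}");`,
  ].join("\n");
}

const invoices: TableSchema = {
  table: "invoices",
  columns: [
    { name: "id", type: "uuid" },
    { name: "total", type: "numeric" },
  ],
};
emitFetchSnippet(invoices); // emits .from("invoices") / .select("id, total")
```

Because the generator sees the real column names and types, the emitted code compiles against your actual database rather than placeholder fixtures.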
Deploy instantly to replay.build/p/your-project
| Layer | Technology |
|---|---|
| Framework | Next.js 14 (App Router) |
| Styling | Tailwind CSS 3.4 |
| AI Models | Google Gemini 3 Pro (generation) |
| AI Vision | Google Gemini 3 Flash (Agentic Vision) |
| Database | Supabase (PostgreSQL) |
| Auth | Supabase Auth (Google OAuth) |
| Payments | Stripe |
| Hosting | Vercel |
| Realtime | Liveblocks (collaboration) |
| Icons | Lucide React |
| Color Picker | @uiw/react-color |
| Plan | Price | Credits/Month | Best For |
|---|---|---|---|
| Sandbox | $0 | 0 (demo only) | Explore the app |
| Pro | $19/mo | 1,500 | Freelancers |
| Agency | $99/mo | 15,000 | Teams (5 members) |
| Enterprise | Custom | Custom | Banks & enterprise |
Credit Costs:
- Video generation: ~150 credits
- AI edit: ~10 credits
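These costs map a plan's monthly credit pool directly to a generation budget. A quick sanity check, using the plan numbers from the table above:

```typescript
// Approximate credit costs from the pricing section.
const VIDEO_GENERATION_COST = 150;
const AI_EDIT_COST = 10;

// Whole video generations a plan's monthly credits can cover.
function generationsPerMonth(monthlyCredits: number): number {
  return Math.floor(monthlyCredits / VIDEO_GENERATION_COST);
}

generationsPerMonth(1500);  // Pro: 10 generations
generationsPerMonth(15000); // Agency: 100 generations

// A Pro month can also be split, e.g. 9 videos + 15 AI edits:
9 * VIDEO_GENERATION_COST + 15 * AI_EDIT_COST; // 1500 credits
```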
replay.build
- Node.js 18+
- Supabase account
- Stripe account
- Google AI Studio API key (Gemini 3)
```bash
git clone https://github.com/ma1orek/replay.git
cd replay
npm install
cp env.example .env.local
```

Fill in your `.env.local`:
```bash
# Supabase
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_anon_key
SUPABASE_SERVICE_ROLE_KEY=your_service_role_key

# Stripe
STRIPE_SECRET_KEY=sk_live_...
STRIPE_WEBHOOK_SECRET=whsec_...
STRIPE_PRO_PRICE_ID_MONTHLY=price_...
STRIPE_PRO_PRICE_ID_YEARLY=price_...

# Gemini AI (Gemini 3 Pro & Flash)
GEMINI_API_KEY=your_gemini_api_key

# App URL
NEXT_PUBLIC_APP_URL=http://localhost:3000
```

Run the migration in the Supabase SQL Editor:

```sql
-- See supabase/migrations/001_initial_schema.sql
```

Enable Google OAuth in Authentication → Providers.
```bash
npm run dev
```

```
replay/
├── app/
│   ├── api/
│   │   ├── generate/          # AI generation endpoints
│   │   │   ├── library/       # Component extraction
│   │   │   ├── blueprints/    # Blueprint AI editing
│   │   │   └── stream/        # Streaming generation
│   │   ├── blueprint/         # Agentic Vision endpoints
│   │   │   ├── vision/        # Surveyor (measurements)
│   │   │   ├── vision-qa/     # QA Tester (verification)
│   │   │   └── edit/          # AI component editing
│   │   ├── credits/           # Credit management
│   │   ├── publish/           # Deployment endpoint
│   │   └── stripe/            # Payment webhooks
│   ├── docs/                  # Documentation pages
│   ├── page.tsx               # Main tool interface
│   └── layout.tsx             # Root layout
├── components/
│   ├── ui/                    # Shadcn-style UI components
│   │   ├── color-picker.tsx   # Advanced color picker
│   │   ├── popover.tsx
│   │   └── ...
│   └── modals/                # Auth, credits modals
├── lib/
│   ├── agentic-vision/        # Sandwich Architecture prompts
│   │   └── prompts.ts         # Surveyor, Generator, QA instructions
│   ├── supabase/              # Database clients
│   ├── prompts/               # AI system prompts
│   └── utils.ts               # Helpers
└── public/
    └── imgg.png               # Social preview (OG image)
```
- Row Level Security (RLS) on all Supabase tables
- Server-side credit transactions (atomic)
- Stripe webhook signature verification
- Service role keys only on server
- Sandboxed iframe previews
- Video to UI generation
- Component Library with Controls
- Visual Editor (formerly Blueprints)
- Flow Map visualization
- AI editing with chat interface (SEARCH/REPLACE + Full HTML modes)
- Color picker with contrast ratio
- One-click publish with cache-busting
- Supabase integration
- Version history
- Agentic Vision (Sandwich Architecture)
- Gemini 3 Pro & Flash integration
- Design System import from Storybook
- 40+ style presets (including Rive interactive)
- React Bits component library (130+ components)
- Enterprise Library taxonomy (5-layer)
- REST API v1 (generate, scan, validate endpoints)
- MCP Server for AI agents (Claude Code, Cursor, etc.)
- LLM discoverability (llms.txt, AI-native metadata)
- Figma plugin export
- Team collaboration
- Component marketplace
Replay is available as a REST API and MCP server for AI agents.
```bash
# Generate React code from video
curl -X POST https://replay.build/api/v1/generate \
  -H "Authorization: Bearer rk_live_..." \
  -H "Content-Type: application/json" \
  -d '{"video_url": "https://example.com/recording.mp4"}'
```

| Endpoint | Description | Credits |
|---|---|---|
| `POST /api/v1/generate` | Video → React + Tailwind code | 150 |
| `POST /api/v1/scan` | Video → UI structure JSON | 50 |
| `POST /api/v1/validate` | Code + Design System → errors | 5 |
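The same call can be made from TypeScript with the standard `fetch` API (Node 18+). The sketch below only builds the request, mirroring the curl example; `rk_live_...` is a placeholder for your real key:

```typescript
// Build the request without sending it; mirrors the curl example above.
const req = new Request("https://replay.build/api/v1/generate", {
  method: "POST",
  headers: {
    Authorization: "Bearer rk_live_...", // placeholder API key
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ video_url: "https://example.com/recording.mp4" }),
});

// Sending it is then one line:
//   const res = await fetch(req); const json = await res.json();
```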
```json
{
  "mcpServers": {
    "replay": {
      "command": "npx",
      "args": ["@replay-build/mcp-server"],
      "env": { "REPLAY_API_KEY": "rk_live_..." }
    }
  }
}
```

Get your API key at replay.build/settings?tab=api-keys.
Full documentation at replay.build/docs
Contributions welcome!
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing`)
- Open a Pull Request
MIT License - see LICENSE for details.
- Next.js – React framework
- Supabase – Database & Auth
- Google Gemini 3 – AI generation (Pro & Flash models)
- Tailwind CSS – Styling
- Lucide – Icons
- Vercel – Hosting
- Liveblocks – Realtime collaboration
Built with ❤️ by the Replay Team
Live Demo Β· Documentation Β· Report Bug
