A voice-based planning interface for Devin that uses DeepWiki for codebase context.
## Setup

1. Copy `.env.example` to `.env` and add your API keys:

   ```sh
   cp .env.example .env
   ```

   Required keys:

   - `OPENAI_API_KEY` - for Whisper speech-to-text
   - `ANTHROPIC_API_KEY` - for Claude AI
   - `DEVIN_API_KEY` - your Devin API key
2. Install dependencies:

   ```sh
   npm install
   ```
3. Start the app:

   ```sh
   npm run dev
   ```
4. Open http://localhost:5173 in your browser.
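A missing key from step 1 otherwise surfaces later as an opaque upstream API error. A minimal startup check could catch it early; this is a sketch, and the helper name is an assumption, not code from this repo:

```typescript
// Hypothetical helper (not part of this repo): report which required
// environment variables are absent or empty.
const REQUIRED_KEYS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "DEVIN_API_KEY"];

function missingKeys(env: Record<string, string | undefined>): string[] {
  // A key counts as missing when it is undefined or an empty string.
  return REQUIRED_KEYS.filter((key) => !env[key]);
}

// At backend startup one might call missingKeys(process.env)
// and refuse to start if the result is non-empty.
```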
## Usage

- Enter a repository URL (GitHub, GitLab, etc.)
- Click "Start Planning"
- Click the microphone button and start talking about what you want to build
- See your words transcribed live as you speak
- AI responds with questions and generates tasks for Devin
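The repository URL entered in the first step has to be broken down before it can be used as codebase context. A sketch of that parsing, assuming a `host/owner/repo` shape; the function name is hypothetical, not the app's actual code:

```typescript
// Hypothetical parser: split a repository URL into host, owner, and repo.
function parseRepoUrl(
  input: string
): { host: string; owner: string; repo: string } | null {
  try {
    const url = new URL(input);
    // Drop leading slashes, then expect at least "owner/repo" in the path.
    const [owner, repo] = url.pathname.replace(/^\/+/, "").split("/");
    if (!owner || !repo) return null;
    // Strip a trailing ".git" so clone URLs and web URLs normalize the same way.
    return { host: url.hostname, owner, repo: repo.replace(/\.git$/, "") };
  } catch {
    return null; // not a valid URL at all
  }
}
```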
## Features

- Live speech transcription as you talk
- Conversational AI planning with Claude
- DeepWiki integration for codebase context (coming soon)
- Task generation for Devin
## Architecture

- Frontend: Vite + React + TypeScript (port 5173)
- Backend: Express + TypeScript (port 3001)
- Voice: OpenAI Whisper (STT) + Claude + OpenAI TTS
- Context: DeepWiki MCP for codebase knowledge
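Because the Vite dev server (5173) and the Express backend (3001) run on different ports, frontend API calls typically go through a dev-server proxy rather than hitting 3001 directly. A sketch of what the relevant `vite.config.ts` fragment could look like; the `/api` prefix is an assumption, not confirmed by this repo:

```typescript
// Hypothetical `server` section of vite.config.ts: forward API requests
// from the frontend dev server (5173) to the Express backend (3001).
const serverConfig = {
  port: 5173,
  proxy: {
    "/api": { target: "http://localhost:3001", changeOrigin: true },
  },
};

// In the real config this object would be passed as
// defineConfig({ server: serverConfig }).
```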
## Remaining Work

To complete the integration:
- DeepWiki MCP Integration - Connect to Cognition's DeepWiki MCP server in `backend/src/services/mcp-client.ts`
- Claude MCP Support - Wire up MCP tools to Claude's API calls
- Task Generation - Add structured task output format
- Devin API - Submit generated tasks to Devin
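Until the structured task output format from the list above exists, here is one hypothetical shape for a generated task and how it might be flattened into a prompt for submission to Devin. The interface and field names are assumptions, not the project's actual format:

```typescript
// Hypothetical task shape; the real structured format is still to be defined.
interface PlannedTask {
  title: string;
  description: string;
  files: string[]; // repo paths surfaced by codebase context, if any
}

// Flatten a planned task into a single prompt string.
function toPrompt(task: PlannedTask): string {
  const files = task.files.length
    ? `\nRelevant files:\n${task.files.map((f) => `- ${f}`).join("\n")}`
    : "";
  return `${task.title}\n\n${task.description}${files}`;
}
```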