- **Chrome Extension**
  - Clean chat-based interview interface
  - Pulls a random LeetCode problem from the NeetCode 150
  - Displays AI-generated questions with TTS audio playback
  - End-of-session feedback with strengths, weaknesses, and actionable improvements
- **FastAPI Backend**
  - `POST /chat` - Send messages to the AI interviewer
  - `POST /feedback` - Get end-of-session feedback
  - `GET /health` - Check server and Ollama status
  - Async endpoints for performance
- **AI Integration**
  - Local Ollama LLM (Llama 3.2)
  - LeetCode-style interview conversation flow
  - Tracks candidate progress for feedback
- **Text-to-Speech**
  - Local Kokoro-onnx TTS for audio generation
  - AI responses converted to speech
- **Coming Soon**
  - LiveKit integration
  - Microphone input and STT
  - General tutoring
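To make the backend's role concrete, here is a minimal sketch of the request shape for Ollama's local `/api/chat` endpoint. It uses only the standard library for brevity (the backend itself uses httpx), and the helper names `build_ollama_payload` and `ask_ollama` are illustrative, not part of the project:

```python
# Sketch: shaping and sending a chat request to a local Ollama instance.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default port

def build_ollama_payload(messages):
    """Build the request body Ollama's /api/chat expects (non-streaming)."""
    return {"model": "llama3.2:latest", "messages": messages, "stream": False}

def ask_ollama(messages):
    """Send the conversation so far and return the assistant's reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_ollama_payload(messages)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

With Ollama running, `ask_ollama([{"role": "user", "content": "Hi"}])` returns the model's reply as a string.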
- **Python 3.10+**
  - Check with `python --version`
- **Ollama** (local LLM runtime)
  - Download and install: https://ollama.ai/download
- **Kokoro-onnx TTS**
  - Download a voice model (~300 MB):

    ```bash
    # adds the .onnx model file to backend/voices
    curl -L \
      https://github.com/thewh1teagle/kokoro-onnx/releases/download/model-files-v1.0/kokoro-v1.0.onnx \
      -o backend/voices/kokoro-v1.0.onnx
    ```
- **Google Chrome** (for the extension)
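The prerequisites above can be sanity-checked with a short script. This is a sketch only: the model path and the `check_prereqs` helper are assumptions for illustration, not part of the project.

```python
# Sketch: verify the prerequisites listed above.
import shutil
import sys
from pathlib import Path

def check_prereqs(python_version=sys.version_info,
                  ollama_path=shutil.which("ollama"),
                  model_path="backend/voices/kokoro-v1.0.onnx"):
    """Return a list of missing prerequisites (empty when all are met)."""
    problems = []
    if tuple(python_version[:2]) < (3, 10):
        problems.append("Python 3.10+ required")
    if ollama_path is None:
        problems.append("Ollama not found on PATH")
    if not Path(model_path).exists():
        problems.append("Kokoro voice model missing")
    return problems
```

Run it from the repository root so the relative model path resolves.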
```bash
# Start Ollama service
ollama serve

# In a new terminal, pull the AI model (only once)
ollama pull llama3.2:latest

# Check downloaded models
ollama list
```

```bash
cd backend

# Create virtual environment (recommended)
python -m venv venv

# Windows PowerShell (on macOS/Linux: source venv/bin/activate)
.\venv\Scripts\Activate.ps1

# Install dependencies
pip install -r requirements.txt
```

```bash
# Make sure you're in the backend directory
cd backend

# Run the server
python app.py
```

The server will start at: http://localhost:8000
Check health: http://localhost:8000/health
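Once both processes are up, the health endpoint can be smoke-tested from Python. A stdlib-only sketch; `is_healthy` and `check_backend` are illustrative helpers, not part of the backend:

```python
# Sketch: smoke-test the running backend's /health endpoint.
import json
import urllib.request

def is_healthy(payload):
    """True when the server reports itself healthy and Ollama connected."""
    return payload.get("status") == "healthy" and payload.get("ollama") == "connected"

def check_backend(url="http://localhost:8000/health"):
    """Fetch /health and report whether the backend is ready for interviews."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return is_healthy(json.load(resp))
```

If `check_backend()` returns `False`, start `ollama serve` before retrying.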
1. Open Chrome and go to `chrome://extensions/`
2. Enable Developer mode (toggle in the top-right)
3. Click "Load unpacked"
4. Navigate to and select the `chrome-extension` folder
5. The extension should now appear in your extensions list
1. Make sure the backend is running:
   - Ollama service: `ollama serve`
   - FastAPI server: `python app.py` (in the `backend` folder)
2. Open the extension by clicking its icon in Chrome
   - Open the side panel to keep the extension open
3. Configure the interview:
   - Interview Type: SWE Intern
   - Question Type: LeetCode-style DSA
4. Click "Start Interview"
5. The AI interviewer will:
   - Introduce itself
   - Open LeetCode
   - Describe the first LeetCode-style question
   - Play the audio
6. During the interview, you can:
   - Type your approach or solution in the text box
   - Ask for hints if you're stuck
   - Clarify the question (e.g., "What are the input constraints?")

   The AI keeps track of all questions and inputs (LeetCode code parsing coming soon).
7. Click the "End Interview" button
8. The AI will generate feedback with:
   - Strengths
   - Weaknesses
   - Actionable improvements
9. Click "Return to Home Page" to go back to the interview setup
**POST /chat** - Send a message to the AI interviewer.

Request:

```json
{
  "messages": [
    {"role": "system", "content": "System prompt"},
    {"role": "user", "content": "User message"}
  ],
  "session_id": "unique-session-id"
}
```

Response:

```json
{
  "message": "AI response text",
  "audio_url": "/audio/speech_123.wav"
}
```

**POST /feedback** - Get end-of-session feedback.

Request:

```json
{
  "messages": [...],
  "session_id": "unique-session-id"
}
```

Response:

```json
{
  "feedback": {
    "strengths": "Good problem-solving approach...",
    "weaknesses": "Could improve time complexity analysis...",
    "improvement": "Practice more dynamic programming problems"
  }
}
```

**GET /health** - Check server health and the Ollama connection.

Response:

```json
{
  "status": "healthy",
  "ollama": "connected",
  "sessions": 3
}
```

**Backend (Python)**
- FastAPI - Web framework
- Uvicorn - ASGI server
- httpx - Async HTTP client for Ollama
- pydantic - Data validation
- kokoro-onnx - Text-to-speech
**Chrome Extension**
- No external dependencies

**Local Runtimes**
- Ollama - Local LLM runtime
- Kokoro-onnx - Local TTS
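For scripted testing, the endpoints in the API reference above can be exercised with a small stdlib-only client. This is a sketch: `post_json` and `make_chat_body` are illustrative names, and the request shape follows the `/chat` example.

```python
# Sketch: a minimal client for the backend's /chat and /feedback endpoints.
import json
import urllib.request

BASE = "http://localhost:8000"

def post_json(path, body):
    """POST a JSON body to the backend and return the decoded response."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def make_chat_body(user_text, session_id, system_prompt="System prompt"):
    """Build the request body shown in the /chat reference above."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
        "session_id": session_id,
    }
```

With the server running, `post_json("/chat", make_chat_body("Hello", "demo-1"))` returns the AI's reply and audio URL.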
**Fully Local** - All data stays on your machine
- No cloud API calls
- No data collection
- No internet required (after the initial model downloads)
**Free & Open Source**
- All components are free
- No API keys needed
- No subscription costs
Happy interviewing!