A web application for practicing debate with an AI opponent. Create structured debates with multiple rounds, submit turns via text or audio transcription, and get AI-generated responses.
- FastAPI - Modern Python web framework
- Uvicorn - ASGI server
- Pydantic - Data validation and models
- OpenAI API - GPT-4o-mini for debate responses and Whisper-1 for audio transcription
- python-dotenv - Environment variable management
- React - UI library
- Vite - Build tool and dev server
- Fetch API - For HTTP requests
- Python 3.9 or higher
- Node.js 16+ and npm (or yarn)
- pip (Python package manager)
- OpenAI API key (optional, but required for full functionality)
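A quick sanity check of the Python requirement, run with whatever interpreter `python3` resolves to:

```python
import sys

# Fail fast if the interpreter is older than the project's minimum.
assert sys.version_info >= (3, 9), f"Python 3.9+ required, found {sys.version.split()[0]}"
print("Python version OK:", sys.version.split()[0])
```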
```bash
git clone <repository-url>
cd AI-Debate-Tutor
```

Install the required packages:

```bash
pip install fastapi uvicorn openai pydantic python-dotenv
```

Or, if you prefer using a virtual environment (recommended):
```bash
cd backend
python3 -m venv venv
source venv/bin/activate   # On macOS/Linux
# or: venv\Scripts\activate   # On Windows
pip install fastapi uvicorn openai pydantic python-dotenv
```

Create a `.env` file in the backend directory:

```bash
cd backend
touch .env
```

Add your OpenAI API key to the `.env` file:
```
OPENAI_API_KEY=your_openai_api_key_here
```
Important:

- The variable name must be `OPENAI_API_KEY` (not `OPEN_API_KEY`)
- The `.env` file is already in `.gitignore` to keep your API key secure
- The app will work without an API key but will use stub responses for AI turns
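The stub-fallback behavior described above can be sketched like this (illustrative only; the real logic lives in the backend and may differ):

```python
import os

def ai_turn(prompt: str) -> str:
    # Without a key, return a canned stub so the app stays usable.
    if not os.getenv("OPENAI_API_KEY"):
        return f"[stub reply] {prompt[:40]}..."
    # With a key set, the real app would call gpt-4o-mini here.
    raise NotImplementedError("real OpenAI call elided in this sketch")

os.environ.pop("OPENAI_API_KEY", None)  # simulate a missing key
print(ai_turn("Resolved: homework should be abolished"))
```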
Navigate to the backend directory and start the server:

```bash
cd backend
python3 -m uvicorn app.main:app --reload --port 8000
```

Note: Use `python3 -m uvicorn` instead of just `uvicorn` if the command is not found in your PATH.
The server will start at http://localhost:8000. You should see:

```
INFO:     Uvicorn running on http://127.0.0.1:8000
INFO:     Application startup complete.
```
Navigate to the frontend directory and install Node.js dependencies:

```bash
cd frontend
npm install
```

Start the development server:

```bash
npm run dev
```

The React app will start at http://localhost:3000 and should automatically open in your browser.
The frontend is configured to connect to http://localhost:8000 by default. You can change the API base URL in the UI if needed.
- `GET /v1/health` - Health check
- `POST /v1/debates` - Create a new debate
- `GET /v1/debates/{debate_id}` - Get debate state and messages
- `POST /v1/debates/{debate_id}/turns` - Submit a turn
- `POST /v1/debates/{debate_id}/auto-turn` - Generate AI assistant turn
- `POST /v1/debates/{debate_id}/finish` - Finish debate early
- `POST /v1/transcribe` - Transcribe audio file
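A minimal Python client for these endpoints might look like the sketch below. The request-body field names (`topic`, `rounds`, `text`) are assumptions for illustration, not the actual Pydantic schema:

```python
import json
import urllib.request

class DebateClient:
    """Minimal client sketch for the debate API (field names are assumed)."""

    def __init__(self, base_url: str = "http://localhost:8000"):
        self.base_url = base_url.rstrip("/")

    def _url(self, path: str) -> str:
        return f"{self.base_url}{path}"

    def create_debate(self, topic: str, rounds: int = 3) -> dict:
        return self._post("/v1/debates", {"topic": topic, "rounds": rounds})

    def submit_turn(self, debate_id: str, text: str) -> dict:
        return self._post(f"/v1/debates/{debate_id}/turns", {"text": text})

    def _post(self, path: str, body: dict) -> dict:
        # Plain-stdlib POST with a JSON body; no external dependencies.
        req = urllib.request.Request(
            self._url(path),
            data=json.dumps(body).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

client = DebateClient()
print(client._url("/v1/health"))  # http://localhost:8000/v1/health
```

With the backend running, `client.create_debate("School uniforms should be mandatory")` would issue the `POST /v1/debates` call.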
Use `python3 -m uvicorn` instead of just `uvicorn`:

```bash
python3 -m uvicorn app.main:app --reload --port 8000
```

Kill the process using port 8000:

```bash
lsof -ti:8000 | xargs kill
```

Or use a different port:

```bash
python3 -m uvicorn app.main:app --reload --port 8001
```

Then update the API base URL in the frontend UI.
This project requires Python 3.9+. The code uses `Optional[str]` rather than the `str | None` union syntax (which requires Python 3.10+) for compatibility; if you see errors mentioning `str | None`, check your interpreter with `python3 --version`.
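The distinction can be seen in a short snippet; `transcribe` here is a hypothetical signature for illustration, not a function from the codebase:

```python
from typing import Optional, Union

# Optional[str] is shorthand for Union[str, None] and works on Python 3.9;
# the equivalent `str | None` spelling only evaluates on Python 3.10+.
assert Optional[str] == Union[str, None]

def transcribe(audio_path: str, language: Optional[str] = None) -> str:
    # Hypothetical helper in the style the codebase uses.
    return f"transcribed {audio_path} (language={language or 'auto'})"

print(transcribe("turn1.wav"))  # transcribed turn1.wav (language=auto)
```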
- Verify the `.env` file is in the `backend` directory
- Check that the variable name is exactly `OPENAI_API_KEY` (case-sensitive)
- Restart the server after creating or modifying the `.env` file
- Remove quotes if your API key is wrapped in quotes (some systems handle this differently)
The frontend uses Vite for fast development with hot module replacement. The app will automatically reload when you make changes.
To build for production:

```bash
cd frontend
npm run build
```

The built files will be in the `frontend/dist` directory.
The backend uses in-memory storage, so data is not persisted between server restarts. This is suitable for development and testing.
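In-memory storage of this kind usually amounts to a module-level dict keyed by debate id. The sketch below is illustrative; the field names are assumptions, not the app's actual models:

```python
import uuid

# All state lives in the server process, so a restart wipes every debate.
DEBATES: dict[str, dict] = {}

def create_debate(topic: str, rounds: int = 3) -> dict:
    debate = {
        "id": uuid.uuid4().hex,  # illustrative id scheme
        "topic": topic,
        "rounds": rounds,
        "messages": [],
    }
    DEBATES[debate["id"]] = debate
    return debate

d = create_debate("AI tutors improve learning outcomes")
print(len(DEBATES))  # 1
```

Swapping this layer for a database is what the production checklist below is about.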
For production deployment, consider:
- Adding a database (PostgreSQL, MongoDB, etc.)
- Implementing user authentication
- Adding rate limiting
- Setting up proper CORS configuration
- Using environment-specific configuration
This application can be deployed using Railway for the backend and Vercel for the frontend.
- GitHub account
- Railway account (sign up at railway.app)
- Vercel account (sign up at vercel.com)
- OpenAI API key
1. Push your code to GitHub (if not already done):

   ```bash
   git add .
   git commit -m "Prepare for deployment"
   git push origin main
   ```

2. Create a new Railway project:
   - Go to railway.app and sign in
   - Click "New Project"
   - Select "Deploy from GitHub repo"
   - Choose your repository
   - Railway will auto-detect the Python backend

3. Configure environment variables:
   - In your Railway project dashboard, go to "Variables"
   - Add the following environment variables:

     ```
     OPENAI_API_KEY=your_openai_api_key_here
     CORS_ORIGINS=https://your-frontend-url.vercel.app
     SCORING_MODEL=gpt-4o
     ```

   - Important: Replace `your-frontend-url.vercel.app` with your actual Vercel deployment URL

4. Deploy:
   - Railway will automatically detect the `Procfile` and start the server
   - The backend will be available at `https://your-project-name.up.railway.app`
   - Note the deployment URL - you'll need it for the frontend

5. Verify deployment:
   - Visit `https://your-backend-url.up.railway.app/v1/health` to verify the backend is running
   - Visit `https://your-backend-url.up.railway.app/docs` to view the API documentation
1. Install the Vercel CLI (optional; you can also use the web interface):

   ```bash
   npm i -g vercel
   ```

2. Deploy from GitHub (recommended):
   - Go to vercel.com and sign in
   - Click "Add New Project"
   - Import your GitHub repository
   - Configure the project:
     - Root Directory: `frontend`
     - Framework Preset: Vite
     - Build Command: `npm run build`
     - Output Directory: `dist`

3. Set environment variables:
   - In your Vercel project settings, go to "Environment Variables"
   - Add: `VITE_API_BASE_URL=https://your-backend-url.up.railway.app`
   - Replace `your-backend-url.up.railway.app` with your Railway backend URL

4. Deploy:
   - Click "Deploy"
   - Vercel will build and deploy your frontend
   - Your app will be available at `https://your-project-name.vercel.app`

5. Update CORS in Railway:
   - Go back to the Railway dashboard
   - Update the `CORS_ORIGINS` variable to include your Vercel URL: `CORS_ORIGINS=https://your-project-name.vercel.app`
   - Railway will automatically redeploy with the new CORS settings
| Variable | Required | Default | Description |
|---|---|---|---|
| `OPENAI_API_KEY` | Yes | - | Your OpenAI API key |
| `CORS_ORIGINS` | No | `*` | Comma-separated list of allowed origins (e.g., `https://app.vercel.app`) |
| `SCORING_MODEL` | No | `gpt-4o` | Model to use for scoring (`gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo`) |
| `SPEECH_CORPUS_DIR` | No | `./app/corpus` | Directory for RAG corpus files |
| `PORT` | Auto-set | - | Port number (automatically set by Railway) |
| Variable | Required | Default | Description |
|---|---|---|---|
| `VITE_API_BASE_URL` | Yes | `http://localhost:8000` | Backend API URL (your Railway deployment URL) |
- Port errors: Railway automatically sets `PORT` - don't override it
- Build failures: Ensure `requirements.txt` includes all dependencies
- CORS errors: Verify `CORS_ORIGINS` includes your Vercel URL (no trailing slash)
- API key errors: Double-check that `OPENAI_API_KEY` is set correctly in Railway
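One way to avoid the trailing-slash pitfall is to normalize origins when parsing the variable. This is a sketch of that idea, not the backend's actual parsing:

```python
import os

def parse_cors_origins(default: str = "*") -> list[str]:
    """Split CORS_ORIGINS on commas, trimming whitespace and trailing
    slashes (a trailing slash makes browser origin matching fail)."""
    raw = os.getenv("CORS_ORIGINS", default)
    return [o.strip().rstrip("/") for o in raw.split(",") if o.strip()]

# Example: a value with a stray trailing slash and extra whitespace.
os.environ["CORS_ORIGINS"] = "https://my-app.vercel.app/, http://localhost:3000"
print(parse_cors_origins())  # ['https://my-app.vercel.app', 'http://localhost:3000']
```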
- API connection errors: Verify `VITE_API_BASE_URL` points to your Railway backend
- Build failures: Ensure all dependencies are in `package.json`
- CORS errors: Check that the Railway `CORS_ORIGINS` includes your Vercel URL
Create a `.env` file in the backend directory for local development:

```
# Copy backend/.env.example to backend/.env and fill in your values
OPENAI_API_KEY=your_openai_api_key_here
CORS_ORIGINS=http://localhost:3000
SCORING_MODEL=gpt-4o
SPEECH_CORPUS_DIR=./app/corpus
```

Note: The `.env` file is gitignored and should never be committed to version control.
[Add your license here]