Transform any YouTube video into an interactive learning experience! Extract transcripts and chat with AI to understand content better.
Your AI-powered YouTube learning assistant is ready to go: paste a link, get the transcript, and start asking questions.
🎥 Universal Video Support - Works with ANY YouTube video using our smart dual-approach system
🤖 AI-Powered Chat - Ask questions and get intelligent answers about video content
⚡ Lightning Fast - Modern React frontend with responsive design
🔄 Smart Fallback - YouTube Transcript API → OpenAI Whisper (rate-limit proof!)
🎨 Beautiful UI - Tailwind CSS with mobile-first design
📱 Cross-Platform - Works on desktop, tablet, and mobile
🆓 Cost Effective - Free YouTube transcripts when available, Whisper only as backup
- Paste YouTube URL → Our system extracts the video ID
- Smart Transcript Extraction:
- 🔄 First attempt: YouTube Transcript API (fast & free)
- 🎵 Fallback: OpenAI Whisper (extracts audio + transcribes)
- AI Analysis → GPT-4 processes the transcript
- Interactive Chat → Ask anything about the video content!
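Step 1 above, turning a pasted URL into a video ID, needs nothing beyond the Python standard library. A minimal sketch covering the common YouTube URL shapes; the helper name is illustrative, not the project's actual code:

```python
from urllib.parse import urlparse, parse_qs

def extract_video_id(url):
    """Illustrative helper: pull the video ID out of common YouTube URL shapes."""
    parsed = urlparse(url)
    if parsed.hostname == "youtu.be":                        # short links
        return parsed.path.lstrip("/") or None
    if parsed.hostname and parsed.hostname.endswith("youtube.com"):
        if parsed.path == "/watch":                          # standard watch links
            return parse_qs(parsed.query).get("v", [None])[0]
        if parsed.path.startswith(("/embed/", "/shorts/")):  # embed / shorts links
            return parsed.path.split("/")[2] or None
    return None

print(extract_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))  # -> dQw4w9WgXcQ
```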
- Python 3.8+
- Node.js 16+
- OpenAI API Key (Get one here)
git clone https://github.com/Pra-soon/Study-Help.git
cd Study-Help
# Create environment file
echo "OPENAI_API_KEY=your_api_key_here" > .env# Install Python dependencies
pip install -r requirements.txt
# Start backend server
python -m uvicorn backend.main:app --host 127.0.0.1 --port 8000 --reload

# Install and start frontend
cd frontend
npm install
npm start

- App: http://localhost:3000
- API Docs: http://localhost:8000/docs
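Once both servers are running, a quick way to confirm the backend is reachable is to hit the /health endpoint listed in the API reference below (the requests package here is just a convenience assumption; curl works equally well):

```python
import requests  # pip install requests

resp = requests.get("http://localhost:8000/health", timeout=5)
print(resp.status_code, resp.text)  # expect a 200 with a small status payload
```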
- Open http://localhost:3000
- Paste any YouTube video URL
- Click "Process Video" (wait 10-60 seconds)
- Click "Start Chatting About This Video"
- Ask questions about the content!
- "What are the main points discussed?"
- "Can you summarize this in bullet points?"
- "What does the speaker say about [specific topic]?"
- "What are the key takeaways?"
✅ Educational content (Khan Academy, TED Talks, etc.)
✅ Tutorials and how-to videos
✅ Lectures and presentations
✅ Music videos (lyrics transcription via Whisper!)
✅ Podcasts and interviews
✅ Any video with or without captions
┌─ Frontend (React 18 + Tailwind CSS)
│ ├─ HomePage.js (Video processing)
│ ├─ ChatPage.js (AI chat interface)
│ └─ Responsive design with Lucide icons
│
├─ Backend (FastAPI + Python)
│ ├─ Video processing endpoints
│ ├─ OpenAI chat integration
│ └─ Automatic API documentation
│
└─ AI Services
├─ YouTube Transcript API (primary)
├─ OpenAI Whisper (fallback)
└─ GPT-4 for Q&A
# Dual-approach for maximum reliability
def get_transcript(video_id):
    try:
        # 🔄 Try YouTube API first (fast & free)
        return youtube_transcript_api.get_transcript(video_id)
    except Exception:
        # 🎵 Fall back to Whisper (reliable & universal)
        audio = extract_audio_with_yt_dlp(video_id)
        return openai_whisper.transcribe(audio)

Study-Help/
├── 🔧 backend/
│ ├── main.py # FastAPI app with all endpoints
│ └── requirements.txt # Python dependencies
├── 🎨 frontend/
│ ├── src/components/
│ │ ├── HomePage.js # Landing + video processing
│ │ └── ChatPage.js # AI chat interface
│ ├── package.json # Node.js dependencies
│ └── tailwind.config.js # Styling configuration
├── ⚙️ utils/
│ ├── youtube.py # Video processing + Whisper fallback
│ └── openai_helper.py # GPT-4 chat integration
├── 📄 .env # API keys (create this!)
├── 📋 requirements.txt # All Python packages
└── 📖 README.md # This file
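For orientation, here is a rough sketch of what the dual-approach pipeline inside utils/youtube.py could look like with the real libraries (youtube-transcript-api, yt-dlp, and the OpenAI SDK). Treat it as an assumption-laden illustration, not the repository's exact code:

```python
from youtube_transcript_api import YouTubeTranscriptApi  # pip install youtube-transcript-api
from yt_dlp import YoutubeDL                             # pip install yt-dlp
from openai import OpenAI                                # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_transcript(video_id):
    try:
        # Primary path: free captions (on 1.x versions of the library the call
        # is YouTubeTranscriptApi().fetch(video_id) instead).
        segments = YouTubeTranscriptApi.get_transcript(video_id)
        return " ".join(seg["text"] for seg in segments)
    except Exception:
        # Fallback path: download the audio with yt-dlp, then transcribe with Whisper.
        url = f"https://www.youtube.com/watch?v={video_id}"
        opts = {"format": "bestaudio/best", "outtmpl": "audio.%(ext)s", "quiet": True}
        with YoutubeDL(opts) as ydl:
            info = ydl.extract_info(url, download=True)
            audio_path = ydl.prepare_filename(info)
        with open(audio_path, "rb") as audio_file:  # the Whisper API accepts files up to ~25 MB
            result = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
        return result.text
```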
POST /api/process-video
Content-Type: application/json
{
"url": "https://www.youtube.com/watch?v=VIDEO_ID"
}

Response:
{
"success": true,
"video_id": "VIDEO_ID",
"transcript": "Full video transcript...",
"video_url": "https://www.youtube.com/watch?v=VIDEO_ID",
"message": "Video processed successfully"
}

POST /api/chat
Content-Type: application/json
{
"transcript": "Video transcript content...",
"question": "What is this video about?"
}

Response:
{
"response": "This video discusses..."
}

Other endpoints:

- GET /                  - Health check
- GET /health            - System status
- GET /api/sample-videos - Get example videos to test
- GET /docs              - Interactive API documentation
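Putting the two main endpoints together from Python (the requests package and the default local URL are assumptions; any HTTP client will do):

```python
import requests

BASE = "http://localhost:8000"

# 1) Process the video: the backend fetches or transcribes the transcript.
video = requests.post(
    f"{BASE}/api/process-video",
    json={"url": "https://www.youtube.com/watch?v=VIDEO_ID"},
).json()

# 2) Chat about it: send the transcript back along with a question.
answer = requests.post(
    f"{BASE}/api/chat",
    json={"transcript": video["transcript"], "question": "What is this video about?"},
).json()

print(answer["response"])
```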
Edit the prompt in utils/openai_helper.py:
prompt = f"""You are a [CUSTOMIZE THIS] AI that answers questions about YouTube videos...
Guidelines for your response:
1. [ADD YOUR GUIDELINES]
2. [CUSTOMIZE TONE AND STYLE]
3. [SET RESPONSE FORMAT]
..."""

- Colors: Edit frontend/tailwind.config.js
- Layout: Modify frontend/src/components/
- Icons: Replace Lucide React icons
- Backend: Add endpoints in backend/main.py
- Frontend: Create components in frontend/src/components/
- AI: Enhance prompts in utils/openai_helper.py
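For context on where that prompt ends up, here is a minimal sketch of how utils/openai_helper.py might feed it to GPT-4 via the OpenAI Python SDK; the function name, prompt wording, and parameters are illustrative assumptions, not the repository's exact code:

```python
from openai import OpenAI

client = OpenAI()  # uses OPENAI_API_KEY from the environment

def ask_about_video(transcript, question):
    # Illustrative prompt only - customize it as described above.
    prompt = f"""You are a helpful AI that answers questions about YouTube videos.

Transcript:
{transcript}

Question: {question}"""
    completion = client.chat.completions.create(
        model="gpt-4",  # or whichever GPT-4-class model the project is configured for
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content
```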
- YouTube Transcript API: Free (primary method)
- Most educational videos work without Whisper costs
- OpenAI Whisper runs only as a fallback, for:
  - Rate-limited videos
  - Videos without transcripts
  - Music videos (for lyrics)
- Whisper cost: ~$0.006 per minute of audio (see the quick estimate below)
- Use educational content (usually has free transcripts)
- Avoid repeated processing of same video
- Add caching (future enhancement)
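To put the ~$0.006/minute figure in perspective, a quick back-of-the-envelope estimate for videos that do fall through to Whisper:

```python
WHISPER_COST_PER_MINUTE = 0.006  # USD, from the pricing note above

for minutes in (10, 30, 60):
    print(f"{minutes:>3}-minute video ≈ ${minutes * WHISPER_COST_PER_MINUTE:.2f}")
# 10-minute ≈ $0.06, 30-minute ≈ $0.18, 60-minute ≈ $0.36
```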
| Issue | Solution |
|---|---|
| "Issue with processing video" | Video might not have transcripts - Whisper will handle it |
| CORS errors | Ensure backend is running on port 8000 |
| OpenAI API errors | Check your API key and account credits |
| Frontend won't start | Run npm install in frontend directory |
| Module not found | Ensure you're in the project root directory |
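On the CORS row above: if errors persist even with the backend on port 8000, the usual FastAPI fix is the CORS middleware. A sketch of what that typically looks like in backend/main.py (the repository's actual configuration may differ):

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow the React dev server (http://localhost:3000) to call the API.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],
    allow_methods=["*"],
    allow_headers=["*"],
)
```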
# Backend with detailed logging
python -m uvicorn backend.main:app --log-level debug
# Check OpenAI API status
curl -H "Authorization: Bearer $OPENAI_API_KEY" https://api.openai.com/v1/models

Create .env file in project root:
# Required
OPENAI_API_KEY=sk-your-openai-api-key-here
# Optional (for future features)
# YOUTUBE_API_KEY=your-youtube-api-key
# DATABASE_URL=sqlite:///./study_help.db
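For reference, a minimal sketch of how the backend might load these values at startup, assuming python-dotenv (the repository's actual loading code may differ):

```python
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env from the project root
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
if not OPENAI_API_KEY:
    raise RuntimeError("OPENAI_API_KEY is not set - add it to your .env file")
```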
# Frontend (Vercel)
cd frontend && vercel
# Backend (Railway)
railway login && railway deploy

# Coming soon - containerized deployment
docker-compose up -d

We love contributions! Here's how to get started:
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
# Setup development environment
git clone https://github.com/Pra-soon/Study-Help.git
cd Study-Help
# Backend development
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
# Frontend development
cd frontend && npm install

This project is licensed under the MIT License - see the LICENSE file for details.
- OpenAI - For GPT-4 and Whisper APIs
- YouTube Transcript API - For free transcript access
- FastAPI - For the amazing Python web framework
- React Team - For the fantastic frontend library
- Tailwind CSS - For the utility-first CSS framework
- 🐛 Bug Reports: Open an issue
- 💡 Feature Requests: Start a discussion
- 📧 Contact: GitHub Profile
⭐ Star this repo if it helped you learn better! ⭐
Made with ❤️ for learners everywhere