An AI-powered customer support system that understands your course content and answers student questions automatically using RAG (Retrieval-Augmented Generation).
| Link | Description |
|---|---|
| Live Demo | Try the AI chat |
| Landing Page | Product homepage |
| API Health | Backend status |
- Content-Aware AI: Indexes course videos, PDFs, and text to answer specific content questions
- Multi-Agent RAG Pipeline: Uses LangGraph to orchestrate retrieval and response generation
- Automatic Transcription: Converts video content to searchable text using OpenAI Whisper
- Embed Widget: Easy-to-integrate chat widget for your course platform
- Creator Dashboard: Review conversations, manage content, and monitor performance
- Credit-Based Billing: Integrated with Lemon Squeezy for payments
- Escalation System: Flags uncertain responses for human review
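The escalation behavior above can be sketched in plain Python. Names and the 0.7 threshold here are illustrative assumptions, not values taken from the codebase:

```python
from dataclasses import dataclass

@dataclass
class RetrievedChunk:
    source: str   # e.g. "Module 3: Advanced Topics"
    score: float  # similarity score from the vector store, 0..1

def should_escalate(chunks: list[RetrievedChunk], threshold: float = 0.7) -> bool:
    """Flag a response for human review when retrieval is weak.

    If nothing was retrieved, or the best match scores below the
    threshold, the answer is likely ungrounded and a human should step in.
    """
    if not chunks:
        return True
    return max(c.score for c in chunks) < threshold

# Strong match: answer confidently.
print(should_escalate([RetrievedChunk("Module 3", 0.92)]))  # False
# Weak match: flag for review.
print(should_escalate([RetrievedChunk("Module 1", 0.41)]))  # True
```

The actual agent in `support_agent.py` runs this kind of check as a node in the LangGraph pipeline, after retrieval and before the final response.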
Frontend:
- Next.js 15 - React framework with App Router
- TypeScript - Type safety
- Tailwind CSS - Styling
- Supabase Auth - Authentication

Backend:
- FastAPI - Python web framework
- LangGraph - Multi-agent orchestration
- LangChain - RAG pipeline
- OpenAI - LLM (GPT-4) and embeddings
- Pinecone - Vector database
- Supabase - PostgreSQL database
- Whisper API - Video transcription
- Node.js 18+ and npm
- Python 3.10+
- OpenAI API key
- Pinecone account
- Supabase project (optional but recommended)
- Lemon Squeezy account (for billing)
1. Navigate to the backend directory:

   ```bash
   cd backend
   ```

2. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Copy the environment template:

   ```bash
   cp .env.example .env
   ```

5. Add your API keys to `.env`:

   ```
   OPENAI_API_KEY=your_key_here
   PINECONE_API_KEY=your_key_here
   SUPABASE_URL=your_url_here
   SUPABASE_ANON_KEY=your_key_here
   LEMON_SQUEEZY_API_KEY=your_key_here
   ```

6. Run the FastAPI server:

   ```bash
   python main.py
   ```

   The API will be available at `http://localhost:8000`.
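Since the server needs several of these keys to function, it can help to fail fast on missing configuration at startup. A minimal sketch using only the standard library; the variable names match the `.env` template above, but the helper itself is not part of the repo:

```python
import os

# Keys the backend cannot run without (per the .env template).
REQUIRED_VARS = ["OPENAI_API_KEY", "PINECONE_API_KEY"]

def missing_env(env: dict[str, str]) -> list[str]:
    """Return required variables that are absent or empty in `env`."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

# At startup: warn (or exit) early instead of failing on the first API call.
problems = missing_env(dict(os.environ))
if problems:
    print(f"Warning: missing env vars: {', '.join(problems)}")
```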
1. Navigate to the frontend directory:

   ```bash
   cd frontend
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Run the development server:

   ```bash
   npm run dev
   ```

   The app will be available at `http://localhost:3000`.
If using Supabase:
- Create a new Supabase project
- Run the SQL schema from `backend/schema.sql` in the Supabase SQL editor
- Enable Row Level Security policies
- Add your Supabase credentials to `.env`
strong_mvp/
├── frontend/
│ ├── src/
│ │ ├── app/
│ │ │ ├── page.tsx # Landing page
│ │ │ ├── demo/ # Chat demo
│ │ │ ├── dashboard/ # Creator dashboard
│ │ │ └── pricing/ # Pricing page
│ │ └── components/
│ │ ├── ChatInterface.tsx # Chat UI
│ │ ├── EmbedWidget.tsx # Widget generator
│ │ └── ContentUpload.tsx # File upload
│ └── package.json
├── backend/
│ ├── app/
│ │ ├── agents/
│ │ │ └── support_agent.py # LangGraph agent
│ │ ├── services/
│ │ │ ├── vector_store.py # Pinecone integration
│ │ │ ├── document_processor.py
│ │ │ ├── content_ingestion.py
│ │ │ └── billing.py # Lemon Squeezy
│ │ └── models/
│ │ └── database.py # Supabase client
│ ├── main.py # FastAPI app
│ ├── requirements.txt
│ └── schema.sql # Database schema
└── README.md
- Go to the dashboard at `/dashboard`
- Navigate to the "Upload Content" section
- Select content type (PDF, video, or text)
- Upload your course materials
- The system will automatically process and index the content
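Processing typically means splitting each document into overlapping chunks before embedding, so that retrieval keeps local context. A simplified, hypothetical version of that step (the chunk sizes are illustrative; the real pipeline lives in `document_processor.py`):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks for embedding and retrieval.

    Overlap keeps sentences that straddle a boundary retrievable
    from both neighboring chunks.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

doc = "A" * 1200
print(len(chunk_text(doc)))  # 3 chunks, stepping 400 characters at a time
```

Each chunk is then embedded (via the OpenAI embeddings API) and upserted into Pinecone along with its source metadata.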
- Go to the dashboard
- Click on "Embed Widget"
- Copy the JavaScript snippet
- Paste it into your website's HTML
- Visit `/demo` to test the chat interface
- Ask questions about your uploaded content
- The AI will respond with citations to specific sources
Upload and process course content
Body:
- `file`: File upload (PDF, video, or text)
- `creator_id`: Creator's unique ID
- `content_type`: Type of content (`pdf`, `video`, or `text`)
- `title`: Optional title
Send a message to the AI assistant
Body:
```json
{
  "message": "What did you cover in Module 3?",
  "creator_id": "creator-id",
  "conversation_id": "optional-conversation-id"
}
```

Response:

```json
{
  "response": "In Module 3, we covered...",
  "sources": ["Module 3: Advanced Topics (Score: 0.92)"],
  "should_escalate": false,
  "conversation_id": "conv-123"
}
```

Retrieve conversation history
Query Parameters:
- `limit`: Number of conversations to return (default: 50)
| Component | Service | URL |
|---|---|---|
| Frontend | Vercel | https://never-afk-ai-lngm.vercel.app |
| Backend | Railway | https://neverafkai-production.up.railway.app |
| Database | Supabase | PostgreSQL (managed) |
| Vector Store | Pinecone | Cloud-hosted |
- Create a Railway project
- Set the root directory to `/backend`
- Add environment variables: `OPENAI_API_KEY`, `PINECONE_API_KEY`, `SUPABASE_URL`, `SUPABASE_ANON_KEY`
- Deploy via GitHub integration
- Create a Vercel project
- Set the root directory to `frontend`
- Add an environment variable: `NEXT_PUBLIC_API_URL` set to the Railway backend URL
- Deploy via GitHub integration
Backend:
- Render
- AWS Lambda (with Mangum)
- Google Cloud Run
Frontend:
- Netlify
- AWS Amplify
- Free: 100 responses/month
- Starter ($29/mo): 1,000 responses/month
- Pro ($49/mo): Unlimited responses
- Multi-language support
- Voice responses using ElevenLabs
- Video avatar responses using HeyGen
- Advanced analytics dashboard
- A/B testing for responses
- Integration with Zapier
- Mobile app
This is a personal MVP project. Contributions and feedback are welcome!
MIT License
For issues and questions, please open an issue on GitHub.