A React-based web application that analyzes political speeches for logical fallacies using AI. Upload a video, get a transcript with highlighted fallacies, and click on words to jump to video timestamps.
- Video Upload: Drag-and-drop or browse to upload political speech videos
- AI Transcription: Automatically transcribe speech using OpenAI Whisper
- Fallacy Detection: Identify 8 types of logical fallacies using GPT-4o-mini
- Interactive Transcript: Click any word to jump to that moment in the video
- Detailed Explanations: Hover over highlighted text to see fallacy explanations
- Modern UI: Built with Next.js, shadcn/ui, and Tailwind CSS
- Strawman - Misrepresenting someone's argument
- Ad Hominem - Attacking the person instead of the argument
- False Dichotomy - Presenting only two options when more exist
- Appeal to Emotion - Using emotions instead of logic
- Slippery Slope - Claiming one thing will lead to extreme consequences
- Hasty Generalization - Drawing conclusions from insufficient evidence
- Red Herring - Diverting attention from the real issue
- Circular Reasoning - The conclusion is assumed in the premise
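A natural way to model these eight categories in the app's TypeScript code is a `const` tuple with a derived union type, so the detector can only ever return a known fallacy. This is a sketch of what `lib/types.ts` might contain; the names are taken from the list above, not from the actual source:

```typescript
// Sketch of how lib/types.ts might model the eight fallacy types (hypothetical).
export const FALLACY_TYPES = [
  "Strawman",
  "Ad Hominem",
  "False Dichotomy",
  "Appeal to Emotion",
  "Slippery Slope",
  "Hasty Generalization",
  "Red Herring",
  "Circular Reasoning",
] as const;

// Union type derived from the tuple, so invalid fallacy names fail to compile.
export type FallacyType = (typeof FALLACY_TYPES)[number];
```

Deriving the union from the array (rather than maintaining both by hand) keeps the runtime list and the type in sync.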
Processing costs (using OpenAI APIs):
- 10-minute video: ~$0.14 total
  - Whisper transcription: ~$0.06
  - GPT-4o-mini analysis: ~$0.08
With the free $5 OpenAI credit:
- You can analyze ~35 videos (10 minutes each)
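Scaling those figures linearly per minute gives a rough cost estimator. The Whisper rate of $0.006 per audio minute matches OpenAI's published pricing; the $0.008-per-minute analysis rate is an assumption derived from the ~$0.08 figure above (actual GPT cost depends on token counts):

```typescript
// Rough per-video cost estimate, scaling the 10-minute figures linearly.
// Whisper: $0.006/min (published rate); analysis: ~$0.008/min (assumed from
// the ~$0.08 figure for a 10-minute speech -- real GPT cost varies with tokens).
const WHISPER_PER_MIN = 0.006;
const ANALYSIS_PER_MIN = 0.008;

export function estimateCostUSD(minutes: number): number {
  return minutes * (WHISPER_PER_MIN + ANALYSIS_PER_MIN);
}

// How many videos of a given length fit in a credit balance.
export function videosOnFreeCredit(credit = 5, minutes = 10): number {
  return Math.floor(credit / estimateCostUSD(minutes));
}
```

For a 10-minute video this yields ~$0.14, and $5 of credit covers about 35 such videos, matching the figures above.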
- Node.js 18+ installed
- OpenAI API key (get $5 free credit at https://platform.openai.com)
- Clone or download this project

- Install dependencies:

  ```
  npm install
  ```

- Set up your OpenAI API key

  Create a `.env.local` file in the root directory:

  ```
  NEXT_PUBLIC_OPENAI_API_KEY=your_openai_api_key_here
  ```

  To get your API key:

  - Go to https://platform.openai.com/api-keys
  - Sign up (you'll get $5 free credit)
  - Create a new API key
  - Copy and paste it into `.env.local`

- Run the development server:

  ```
  npm run dev
  ```

- Open your browser and navigate to http://localhost:3000
- Drag and drop a video file (MP4, MOV, WebM) onto the upload area
- Or click "Select Video" to browse your files
- Maximum recommended file size: 100MB
- Recommended length: 5-15 minutes for best results
The app will:
- Transcribe the video (~30-60 seconds)
- Analyze for logical fallacies (~20-40 seconds)
- Generate the interactive analysis view
Left Panel:
- Video player with standard controls
- Live transcript that highlights the current word being spoken
Right Panel:
- Full transcript with all detected fallacies highlighted in blue
- Click any word to jump to that timestamp in the video
- Hover over blue highlighted text to see fallacy details
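The click-to-jump behavior boils down to setting the player's `currentTime` to the clicked word's start timestamp. A minimal sketch, with hypothetical type and function names (the real component presumably wires this to each word's `onClick` via a ref to the `<video>` element):

```typescript
// Hypothetical shape of one transcript word with Whisper word-level timestamps.
interface TranscriptWord {
  text: string;
  start: number; // seconds
  end: number;   // seconds
}

// Seek anything exposing `currentTime` (e.g. an HTMLVideoElement) to a word.
// Accepting a minimal structural type keeps the helper testable without a DOM.
function seekToWord(player: { currentTime: number }, word: TranscriptWord): void {
  player.currentTime = word.start;
}
```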
Fallacy Tooltip: When you hover over a highlighted section, you'll see:
- Fallacy type
- Severity level (minor/moderate/severe)
- The exact quote
- Explanation of why it's a fallacy
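The tooltip's four fields suggest a per-fallacy record along these lines. This is an illustrative guess at what `lib/types.ts` defines, not the actual source; the word-index fields are an assumption about how highlights map back onto the transcript:

```typescript
type Severity = "minor" | "moderate" | "severe";

// Hypothetical shape of one detected fallacy, mirroring the tooltip's fields.
interface FallacyInstance {
  type: string;        // e.g. "Ad Hominem"
  severity: Severity;
  quote: string;       // the exact words flagged in the transcript
  explanation: string; // why this passage is fallacious
  startWord: number;   // index range into the transcript word list (assumed)
  endWord: number;
}

const example: FallacyInstance = {
  type: "False Dichotomy",
  severity: "moderate",
  quote: "Either we pass this bill or the economy collapses.",
  explanation: "Presents only two outcomes when many intermediate ones exist.",
  startWord: 42,
  endWord: 51,
};
```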
```
political-speech-analyzer/
├── app/
│   ├── page.tsx             # Upload page
│   ├── analysis/page.tsx    # Analysis view
│   ├── layout.tsx           # Root layout
│   └── globals.css          # Global styles
├── components/
│   ├── ui/                  # shadcn/ui components
│   │   ├── button.tsx
│   │   ├── card.tsx
│   │   ├── progress.tsx
│   │   └── tooltip.tsx
│   ├── VideoUpload.tsx      # Upload component
│   ├── VideoPlayer.tsx      # Video player with live transcript
│   ├── TranscriptView.tsx   # Interactive transcript
│   ├── FallacyTooltip.tsx   # Fallacy explanation popup
│   └── ProcessingView.tsx   # Loading state
├── lib/
│   ├── types.ts             # TypeScript types
│   ├── utils.ts             # Utility functions
│   └── openai.ts            # OpenAI API integration
└── public/                  # Static assets
```
- Framework: Next.js 14 (App Router)
- Language: TypeScript
- Styling: Tailwind CSS
- UI Components: shadcn/ui (Radix UI primitives)
- AI APIs: OpenAI (Whisper + GPT-4o-mini)
- Video Upload: File is stored in browser memory (not uploaded to any server)
- Transcription: Video sent to OpenAI Whisper API, returns transcript with word-level timestamps
- Analysis: Transcript sent to GPT-4o-mini with custom prompt to detect logical fallacies
- Display: Results stored in sessionStorage and rendered in interactive UI
- Interaction: Click events and hover states are managed with React state
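The analysis step amounts to sending GPT-4o-mini the transcript with a prompt that asks for structured JSON, then parsing the reply defensively. The function names and prompt wording below are illustrative, not the app's actual code, and the network call itself (via the OpenAI chat completions API) is omitted:

```typescript
// Illustrative sketch of the analysis step: build a prompt asking GPT-4o-mini
// for a JSON array of fallacies, then parse the model's reply. The actual
// OpenAI API call is omitted; only the pure pieces are shown here.
function buildFallacyPrompt(transcript: string): string {
  return [
    "Identify logical fallacies in this political speech transcript.",
    'Reply with a JSON array of objects: {"type", "severity", "quote", "explanation"}.',
    "Transcript:",
    transcript,
  ].join("\n");
}

interface DetectedFallacy {
  type: string;
  severity: "minor" | "moderate" | "severe";
  quote: string;
  explanation: string;
}

// Parse the model's reply defensively; LLM output is not guaranteed to be
// valid JSON, so malformed replies yield an empty list instead of a crash.
function parseFallacies(reply: string): DetectedFallacy[] {
  try {
    const parsed = JSON.parse(reply);
    return Array.isArray(parsed) ? parsed : [];
  } catch {
    return [];
  }
}
```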
- Local Only: Videos and analysis results stay in your browser
- No Database: Uses sessionStorage for temporary data
- Privacy: Nothing is stored on any server
- OpenAI API Key: Required for transcription and analysis
- Browser Only: API calls are made directly from the browser, which exposes your API key and is not production-ready security
- Session Storage: Data cleared when you close the browser tab
- File Size: Large videos (>100MB) may cause performance issues
- Accuracy: AI detection is not perfect - results should be reviewed critically
For a production deployment, you should:
- Move API calls to server-side (use Next.js API routes)
- Add proper authentication
- Store videos and results in a database
- Add rate limiting
- Implement proper error handling
- Add user accounts
- Deploy to Vercel or similar platform
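Moving the API calls server-side could look like the Next.js route handler sketched below. The file path and details are hypothetical; the key point is that the key lives in `OPENAI_API_KEY` (no `NEXT_PUBLIC_` prefix), so Next.js never ships it to the browser:

```typescript
// Hypothetical app/api/analyze/route.ts: proxy the analysis server-side so
// the OpenAI key (OPENAI_API_KEY, no NEXT_PUBLIC_ prefix) stays off the client.
export async function POST(req: Request): Promise<Response> {
  const body = await req.json().catch(() => ({}));
  if (typeof body.transcript !== "string" || body.transcript.length === 0) {
    return new Response(JSON.stringify({ error: "transcript required" }), {
      status: 400,
      headers: { "Content-Type": "application/json" },
    });
  }
  const key = process.env.OPENAI_API_KEY;
  if (!key) {
    return new Response(JSON.stringify({ error: "server not configured" }), {
      status: 500,
      headers: { "Content-Type": "application/json" },
    });
  }
  // Forward the request to OpenAI with the server-held key.
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${key}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: body.transcript }],
    }),
  });
  return new Response(await upstream.text(), { status: upstream.status });
}
```

Validating input before touching the key means bad requests are rejected cheaply, and rate limiting or auth checks would slot in at the top of the handler.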
- Check that your OpenAI API key is correct in `.env.local`
- Ensure you have credit in your OpenAI account
- Try a smaller video file
- Make sure the video format is supported (MP4, MOV, WebM)
- Try using Chrome or Firefox
- The speech may genuinely have no fallacies
- Try a more clearly political/argumentative speech
- AI detection isn't perfect - some fallacies may be missed
- Check your OpenAI usage at https://platform.openai.com/usage
- Add payment method or wait for monthly reset
Potential features to add:
- Support for multiple speakers
- Fact-checking integration
- Export results as PDF
- Side-by-side comparison of two speeches
- Real-time analysis of live streams
- Community voting on fallacy accuracy
- Browser extension for YouTube videos
MIT License - feel free to use and modify as needed.
This tool is for educational and analytical purposes. The AI-detected fallacies should not be considered definitive and should be reviewed critically. Political analysis requires human judgment and context.