A full-stack application with a FastAPI backend, a React frontend, and an LLM-powered extremist content detection service.
Start all services:

    ./start.sh

Stop all services:

    ./stop.sh

Services will be available at:
- Frontend: http://localhost:5173
- Backend: http://localhost:8000
- LLM Agent: http://localhost:8001
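
After starting the stack, a quick way to confirm the two APIs are up is to hit their `/health` endpoints (documented further down). The snippet below is only an illustration and is not part of the repository:

```python
# Minimal liveness check for the locally running services.
# Assumes the default ports listed above and the /health endpoints
# documented in the API section of this README.
import requests

SERVICES = {
    "backend": "http://localhost:8000/health",
    "llm_agent": "http://localhost:8001/health",
}

for name, url in SERVICES.items():
    try:
        resp = requests.get(url, timeout=5)
        print(f"{name}: HTTP {resp.status_code}")
    except requests.ConnectionError:
        print(f"{name}: not reachable")
```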
Start all services with Docker:

    docker-compose up --build

Stop all services:

    docker-compose down

Project structure:

    junctionx_clappers/
    ├── backend/              # FastAPI backend application
    │   ├── app/              # Application code
    │   ├── Dockerfile
    │   └── requirements.txt
    ├── frontend/             # React + Vite frontend
    │   ├── src/              # React components and logic
    │   ├── Dockerfile
    │   └── package.json
    ├── llm_agent/            # LLM-based content detection service
    │   ├── agent/            # Detection agent logic
    │   ├── Dockerfile
    │   ├── entrypoint.sh     # Ollama setup script
    │   └── requirements.txt
    ├── docker-compose.yml    # Main Docker orchestration
    ├── start.sh              # Start all services locally
    └── stop.sh               # Stop all services
The backend provides:

- FastAPI web framework
- MySQL database with SQLAlchemy ORM
- Audio file upload and processing
- Integration with LLM agent for content detection (see the sketch below)
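
Roughly, the upload flow can be pictured as follows. This is a hedged sketch, not the code in `backend/app/`: the `transcribe()` placeholder, the `{"text": ...}` payload sent to the LLM agent, and the response handling are assumptions for illustration.

```python
# Illustrative sketch only, not the actual code in backend/app/.
# It shows the general shape of an upload route that accepts an audio file
# and forwards the transcribed text to the LLM agent's /detect endpoint.
import requests
from fastapi import FastAPI, File, UploadFile

app = FastAPI()
LLM_AGENT_URL = "http://localhost:8001"  # agent port listed in this README


def transcribe(audio_bytes: bytes) -> str:
    """Placeholder for the transcription step; the real implementation is not shown in this README."""
    return "transcript placeholder"


@app.post("/upload-audio")
def upload_audio(file: UploadFile = File(...)):
    audio_bytes = file.file.read()
    text = transcribe(audio_bytes)
    # Ask the LLM agent whether the transcript contains extremist content.
    resp = requests.post(f"{LLM_AGENT_URL}/detect", json={"text": text}, timeout=60)
    resp.raise_for_status()
    return {"filename": file.filename, "detection": resp.json()}
```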
The frontend is built with:

- React 18 with TypeScript
- Vite for fast development and building
- Tailwind CSS for styling
- shadcn/ui component library
- Audio upload and transcription interface

The LLM agent service provides:

- Qwen 3 8B language model via Ollama (see the prompt sketch after this list)
- Extremist content detection
- LangGraph-based agent architecture
- FastAPI service with health checks
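
For intuition, the detection step boils down to prompting Qwen 3 8B through Ollama's local REST API. The sketch below is not the repository's LangGraph agent; the prompt wording, the output handling, and the default Ollama port (11434) are assumptions.

```python
# Illustrative only: a direct prompt to Qwen 3 8B through Ollama's local REST API.
# The real service wraps this kind of call in a LangGraph-based agent.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default port


def classify(text: str) -> str:
    payload = {
        "model": "qwen3:8b",
        "messages": [
            {
                "role": "user",
                "content": (
                    "Does the following text contain extremist content? "
                    "Answer 'yes' or 'no' and give a one-sentence reason.\n\n" + text
                ),
            }
        ],
        "stream": False,
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["message"]["content"]


if __name__ == "__main__":
    print(classify("We are meeting at the park on Saturday."))
```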
Prerequisites:

- mamba (conda alternative)
- Ollama (https://ollama.com)
- Node.js 18+
- Docker (for MySQL)
- Docker and Docker Compose (for the Docker-based setup)
If you prefer to run services individually instead of using ./start.sh:

Backend:

    cd backend
    mamba create -n backend python=3.10 -y
    mamba run -n backend pip install -r requirements.txt
    docker-compose up -d mysql    # Start MySQL only
    mamba run -n backend uvicorn app.main:app --reload --port 8000

Frontend:

    cd frontend
    npm install
    npm run dev

LLM agent:

    cd llm_agent
    mamba create -n llm_agent python=3.11 -y
    mamba run -n llm_agent pip install -r requirements.txt
    ollama serve &                # Start Ollama
    ollama pull qwen3:8b          # Pull model
    mamba run -n llm_agent uvicorn api:app --port 8001

Once services are running, visit:
- Backend: http://localhost:8000/docs
- LLM Agent: http://localhost:8001/docs

Backend (port 8000):

- `GET /` - Root endpoint
- `GET /health` - Health check
- `POST /upload-audio` - Upload audio file for transcription

LLM Agent (port 8001):

- `GET /` - Service information
- `GET /health` - Health check
- `POST /detect` - Detect extremist content in text
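
Example calls from Python. The multipart field name (`file`), the `/detect` payload shape (`{"text": ...}`), and the response formats are assumptions; the interactive `/docs` pages above show the authoritative schemas.

```python
# Example client calls against the documented endpoints.
import requests

BACKEND = "http://localhost:8000"
LLM_AGENT = "http://localhost:8001"

# Upload an audio file for transcription.
with open("sample.wav", "rb") as f:
    r = requests.post(f"{BACKEND}/upload-audio", files={"file": f}, timeout=120)
print(r.status_code, r.json())

# Run extremist-content detection on a piece of text directly.
r = requests.post(f"{LLM_AGENT}/detect", json={"text": "some transcript text"}, timeout=120)
print(r.status_code, r.json())
```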
Backend stack:

- FastAPI: Modern, fast web framework for building APIs
- SQLAlchemy: SQL toolkit and ORM
- MySQL: Database
- Uvicorn: ASGI server

Frontend stack:

- React: UI library
- TypeScript: Type-safe JavaScript
- Vite: Build tool and dev server
- Tailwind CSS: Utility-first CSS framework
- shadcn/ui: Re-usable component library

LLM agent stack:

- Ollama: Local LLM inference
- Qwen 3 8B: Language model
- LangGraph: Agent orchestration framework
- FastAPI: Service API

To build the application within the short time span, we relied heavily on various AI tools, such as LLM web chats, GitHub Copilot, and Claude Code.

This project is licensed under the MIT License.