A FastAPI-based engine that computes the success probability of startup ideas using LLM feature extraction combined with rigorous, math-only weighted Bayesian scoring.
- LLM Feature Extraction: Extracts 10+ standardized features (novelty, competition, readiness, etc.) from raw natural language startup ideas.
- Multi-Provider AI: Easily toggle between Anthropic, OpenAI, or OpenRouter just by updating your `.env` file.
- Bayesian Scoring: Computes the actual probability using pure mathematical confidence intervals based on established startup research data (no LLM guessing).
- Evidence-Based Updates: Log milestones, pivots, or setbacks as "evidence" and let the engine auto-adjust the probability mathematically using Sequential Bayesian Updating.
- High Performance: Asynchronous PostgreSQL access through SQLAlchemy, with Redis caching for fast repeated queries.
- API Layer: FastAPI endpoints for submitting ideas and tracking history.
- LLM Provider Factory: Connects to Claude, ChatGPT, or OpenRouter for pure data extraction.
- ML Engine: `bayesian_updater.py` and calibration tools run the math based on `feature_weights.json`.
- Storage: Postgres for persistence, Redis for score history and rapid cache lookups.
- Event Streaming: Redis Pub/Sub streams for async task processing.
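To make the ML Engine's role concrete, here is a minimal sketch of weighted Bayesian scoring and sequential evidence updates in log-odds space. The function names, feature names, weights, and the 10% base rate are illustrative assumptions, not the repository's actual API; `bayesian_updater.py` and `feature_weights.json` may implement this differently.

```python
import math

def logit(p: float) -> float:
    return math.log(p / (1.0 - p))

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def prior_from_features(features: dict, weights: dict, base_rate: float = 0.10) -> float:
    """Weighted prior: shift the base-rate log-odds by each feature's
    weighted deviation from a neutral score of 0.5."""
    shift = sum(weights[name] * (score - 0.5) for name, score in features.items())
    return sigmoid(logit(base_rate) + shift)

def bayes_update(p: float, likelihood_ratio: float) -> float:
    """Sequential Bayesian update: posterior odds = prior odds * LR,
    done additively in log-odds space."""
    return sigmoid(logit(p) + math.log(likelihood_ratio))

# Hypothetical extracted features (0..1) and weights, standing in
# for the real feature_weights.json.
features = {"novelty": 0.8, "competition": 0.3, "readiness": 0.6}
weights = {"novelty": 1.2, "competition": -0.9, "readiness": 0.7}

p = prior_from_features(features, weights)
for lr in (2.0, 1.5, 0.8):  # two positive milestones, one setback
    p = bayes_update(p, lr)
print(round(p, 3))
```

Working in log-odds keeps each evidence update a simple addition, so milestones and setbacks can be replayed in any order to reproduce a score history.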
- Docker & Docker Compose
- Python 3.12+
Copy the .env template:
```bash
cp .env.example .env
```

Add your API keys inside .env:

```bash
LLM_PROVIDER=anthropic      # Choose: anthropic, openai, openrouter
ANTHROPIC_API_KEY=your_key  # Required if using anthropic
OPENAI_API_KEY=your_key     # Required if using openai
```

Start the database and cache:

```bash
docker compose up -d postgres redis
```

Create a virtual environment and install dependencies:

```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

Run the development server:

```bash
uvicorn app.main:app --reload
```

Interactive API Documentation will be available at: http://localhost:8000/api/v1/docs
To run the entire stack (API, database, and cache) inside Docker:
```bash
docker compose -f docker-compose.yml up --build
```
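Once the stack is up, you can exercise the API from the command line. The `/api/v1/ideas` route and request body below are illustrative assumptions, not confirmed routes; consult the interactive docs at http://localhost:8000/api/v1/docs for the actual endpoints and schemas.

```shell
# NOTE: the /api/v1/ideas route and JSON schema are assumptions for
# illustration; check /api/v1/docs for the real ones.
PAYLOAD='{"idea": "A marketplace for refurbished lab equipment"}'
curl -s -X POST "http://localhost:8000/api/v1/ideas" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || true  # tolerate failure if the stack is not running
```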