Eureka is an intelligent research platform that helps scientists discover connections, identify research gaps, and generate hypotheses using AI-powered knowledge graphs and advanced RAG (Retrieval-Augmented Generation).
- Conversational Research (RAG): Natural language queries with AI-powered answers and citations using HuggingFace embeddings
- Knowledge Graph: Interactive visualization of research relationships and connections
- Autonomous Discovery: HuggingFace-powered AI agents identify research gaps, contradictions, and emerging trends
- Hypothesis Generation: HuggingFace models generate testable research hypotheses from identified gaps
- Pattern Recognition: Multi-agent analysis across domains and methodologies using HuggingFace models
- Upload Papers → PDF processing and text extraction
- RAG Chat → Query papers using semantic search (HuggingFace embeddings)
- Discovery Analysis → HuggingFace models analyze documents for:
  - Research gap identification
  - Hypothesis generation
  - Trend detection
  - Contradiction detection
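The RAG Chat step above rests on semantic search: the question and each document chunk are embedded as vectors (the embedding model named later in this README is `sentence-transformers/all-MiniLM-L6-v2`), and chunks are ranked by cosine similarity. A minimal sketch of that ranking with toy 3-dimensional vectors (real embeddings are 384-dimensional, and the actual retrieval code may differ):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, chunk_vecs, k=5):
    """Return the indices of the k chunks most similar to the query."""
    ranked = sorted(range(len(chunk_vecs)),
                    key=lambda i: cosine(query_vec, chunk_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy "embeddings": chunk 0 points almost the same way as the query.
query = [1.0, 0.0, 0.0]
chunks = [[0.9, 0.1, 0.0], [0.0, 1.0, 0.0], [0.5, 0.5, 0.0]]
print(top_k(query, chunks, k=2))  # chunk 0 ranks first
```

The top-ranked chunks are then passed to the language model as context, which is what lets answers carry citations back to specific papers.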
- Modern Stack: React 18 + TypeScript + Vite + TailwindCSS
- Production Ready: Error boundaries, loading states, lazy loading, accessibility
- Scalable Backend: FastAPI with async/await, PostgreSQL, vector search
- Developer Experience: Hot module replacement, TypeScript strict mode, ESLint ready
```bash
git clone <repository-url>
cd eureka.ai

# Install dependencies
npm install

# Create environment file
cp .env.example .env

# Start development server
npm run dev
```

The frontend will be available at http://localhost:3000
```bash
# Navigate to backend directory
cd backend

# Create virtual environment
python -m venv venv

# Activate virtual environment
# Windows:
venv\Scripts\activate
# Mac/Linux:
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Start backend server
python -m uvicorn app.main:app --reload --port 8000
```

The backend API will be available at http://localhost:8000

API Documentation: http://localhost:8000/docs
```bash
# Development server with hot reload
npm run dev

# Build for production
npm run build

# Preview production build
npm run preview

# Type checking
npm run lint
```

```bash
# Development server with auto-reload
cd backend
uvicorn app.main:app --reload --port 8000

# Production server
uvicorn app.main:app --host 0.0.0.0 --port 8000 --workers 4

# Run tests (if available)
pytest tests/
```
1. Install Python dependencies:

   ```bash
   cd backend
   pip install -r requirements.txt
   ```

2. HuggingFace models download automatically on first use:
   - Embedding model: `sentence-transformers/all-MiniLM-L6-v2` (~90MB)
   - Discovery model: `google/flan-t5-small` (~300MB)

3. Optional: configure models in `backend/app/config.py`:

   ```python
   DISCOVERY_MODEL: str = "google/flan-t5-small"  # or flan-t5-base for better quality
   HF_USE_LOCAL_GENERATOR: bool = True
   ```
Create a `.env` file in the root directory:

```bash
# Backend API URL
VITE_API_BASE_URL=http://localhost:8000/api

# Optional: Debug mode
VITE_DEBUG=true
```

Backend configuration is managed in `backend/app/config.py`. Key settings:
- Database connection
- API keys (HuggingFace, etc.)
- Model configurations
- CORS origins
- `POST /api/documents/upload` - Upload a research paper (PDF)
- `GET /api/documents` - List all uploaded documents
- `GET /api/documents/{id}` - Get document details
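The upload endpoint takes a PDF as `multipart/form-data`. A stdlib-only sketch of calling it (the form field name `file` is an assumption; check http://localhost:8000/docs for the actual schema):

```python
import io
import uuid
from urllib import request

API_BASE = "http://localhost:8000/api"

def build_multipart(field, filename, content, content_type="application/pdf"):
    """Encode a single file as a multipart/form-data body using only the stdlib."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(f'Content-Disposition: form-data; name="{field}"; '
               f'filename="{filename}"\r\n'.encode())
    body.write(f"Content-Type: {content_type}\r\n\r\n".encode())
    body.write(content)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return body.getvalue(), f"multipart/form-data; boundary={boundary}"

def upload_pdf(path):
    """POST a PDF to /api/documents/upload (requires the backend running)."""
    with open(path, "rb") as f:
        data, ctype = build_multipart("file", path, f.read())
    req = request.Request(f"{API_BASE}/documents/upload", data=data,
                          headers={"Content-Type": ctype})
    with request.urlopen(req) as resp:
        return resp.read()
```

With `requests` installed, the same call collapses to `requests.post(url, files={"file": open(path, "rb")})`.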
- `POST /api/queries/ask` - Ask a question about uploaded documents:

  ```json
  {
    "question": "What are the main findings?",
    "document_id": 1,
    "top_k": 5
  }
  ```

  `document_id` is optional; omit it to query across all documents.
- `GET /api/queries/history` - Get query history
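A minimal client sketch for the ask endpoint, using only the stdlib and the request body shown above (the response shape is not specified here, so `ask` just returns the parsed JSON):

```python
import json
from urllib import request

API_BASE = "http://localhost:8000/api"  # matches the default VITE_API_BASE_URL

def build_ask_payload(question, document_id=None, top_k=5):
    """Build the JSON body for POST /api/queries/ask."""
    body = {"question": question, "top_k": top_k}
    if document_id is not None:
        body["document_id"] = document_id  # optional: query one document
    return body

def ask(question, **kwargs):
    """Send the question to a running backend and return the parsed answer."""
    data = json.dumps(build_ask_payload(question, **kwargs)).encode()
    req = request.Request(f"{API_BASE}/queries/ask", data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)
```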
- `POST /api/discovery/analyze` - Run a full discovery analysis on all uploaded documents. Uses HuggingFace models to identify gaps, generate hypotheses, and detect trends and contradictions.
- `GET /api/discovery/gaps` - Get research gaps (from last analysis)
- `GET /api/discovery/hypotheses` - Get generated hypotheses
- `GET /api/discovery/trends` - Get trending topics
- `GET /api/discovery/contradictions` - Get detected contradictions
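The four GET endpoints above share one URL shape, so a small helper can cover them all. A sketch (the validation helper is illustrative, not part of the API):

```python
import json
from urllib import request

API_BASE = "http://localhost:8000/api"

DISCOVERY_KINDS = ("gaps", "hypotheses", "trends", "contradictions")

def discovery_url(kind):
    """Map a discovery result kind to its GET endpoint."""
    if kind not in DISCOVERY_KINDS:
        raise ValueError(f"unknown discovery kind: {kind!r}")
    return f"{API_BASE}/discovery/{kind}"

def fetch_discovery(kind):
    """Fetch one kind of result; requires the backend running and a
    prior POST /api/discovery/analyze to populate the results."""
    with request.urlopen(discovery_url(kind)) as resp:
        return json.load(resp)
```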
Full API Documentation: http://localhost:8000/docs