AI-Powered Food Classification Web Application
Snap a picture of your food and let AI identify the dish instantly!
- 🌟 Features
- 🏗️ Project Structure
- 🛠️ Tech Stack
- 🚀 Quick Start
- 📖 Installation Documentation
- ⚙️ Configuration
- 📖 Detailed Setup
- 🤝 Contributing
- 📝 API Documentation
- 🧪 Testing
- 📊 Model Information
- 🐛 Troubleshooting
- 📄 License
- 📸 Image Upload & Preview: Drag-and-drop or click to upload food images
- 🤖 AI-Powered Classification: ResNet18 model trained on Nigerian dishes
- 📊 Confidence Scores: Get prediction confidence percentages
- 🗂️ Automatic Organization: Images saved to predicted class folders
- ⚡ Real-time Processing: Instant classification results
- 📱 Responsive Design: Works seamlessly on desktop, tablet, and mobile
- 🎭 Modern UI: Built with TailwindCSS and React components
- 🔄 Loading States: Visual feedback during processing
- ❌ Error Handling: User-friendly error messages and recovery
- 🌙 Dark Mode Support: Comfortable viewing in any lighting
- 🌍 Internationalization (i18n): Multi-language support (English, French, Arabic, Yoruba) with RTL layout
- 📡 RESTful API: Clean API endpoints for integration
- 🧪 Comprehensive Testing: Unit, integration, and E2E tests
- 📝 Type Safety: Full TypeScript implementation
- 🐳 Docker Support: Containerized deployment ready
- 📊 Analytics: Classification history and insights
FlavorSnap follows a modular microservices architecture with clear separation of concerns. For complete documentation, see Project Structure Documentation.
flavorsnap/
├── 📁 frontend/ # Next.js web application
├── 📁 ml-model-api/ # Flask ML inference API
├── 📁 contracts/ # Soroban smart contracts
├── 📁 flavorsnap-food-registry/ # Rust-based food registry
├── 📁 dataset/ # Training and validation data
├── 📁 models/ # Trained model files
├── 📁 docs/ # Comprehensive documentation
├── 📁 scripts/ # Utility and setup scripts
├── 📁 config/ # Configuration files
└── 📁 uploads/ # User uploaded images
- 🎨 Frontend: Next.js 15 with React 19, TypeScript, TailwindCSS
- 🧠 ML API: Flask with PyTorch ResNet18 model
- ⛓️ Blockchain: Soroban smart contracts on Stellar
- 📊 Analytics: Classification history and insights
- 🐳 Containers: Docker support for all environments
| Document | Purpose |
|---|---|
| Project Structure | Complete directory structure and organization |
| Blockchain Architecture | Decentralized governance and incentive design |
| Smart Contracts | Technical documentation for developer and user interactions |
| Development Workflow | Development process and guidelines |
| File Purposes | Detailed file responsibilities |
| Installation Guide | Comprehensive setup instructions |
| Configuration Guide | Configuration options and settings |
| Troubleshooting Guide | Common issues and solutions |
| Blockchain Integration | Role of Stellar and Soroban in the ecosystem |
- 📊 Structure Analysis: `python scripts/analyze_structure.py`
- ⚡ Quick Setup: `python scripts/install.py`
- 🧪 Environment Check: `python scripts/check_environment.py`
- 🐳 Docker Management: `./scripts/docker_run.sh`
For detailed information about project organization, file purposes, and development workflows, please refer to the comprehensive documentation in the docs/ directory.
- Framework: Next.js 15.3.3 with React 19
- Language: TypeScript 5
- Styling: TailwindCSS 4
- Icons: Lucide React
- State Management: React Hooks & Context
- HTTP Client: Axios/Fetch API
- Form Handling: React Hook Form
- Testing: Jest & React Testing Library
- i18n: next-i18next with RTL support
- Framework: PyTorch
- Architecture: ResNet18 (ImageNet pretrained)
- Image Processing: Pillow & torchvision
- Model Serving: FastAPI
- Inference: CPU-optimized for deployment
- API: FastAPI with RESTful endpoints
- Language: Python 3.8+
- File Storage: Local filesystem (configurable)
- Image Processing: Pillow, OpenCV
- Serialization: JSON
- Platform: Stellar/Soroban
- Language: Rust
- Smart Contracts: Model governance, incentives
- SDK: Soroban SDK v22.0.6
- Version Control: Git
- Package Manager: npm/yarn/pnpm
- Code Quality: ESLint, Prettier
- Containerization: Docker & Docker Compose
- CI/CD: GitHub Actions (planned)
- Python 3.8+ and pip (download)
- Node.js 18+ and npm/yarn (download)
- Git (download)
- 4GB+ RAM for model loading
- ~3GB disk space (PyTorch is large)
# Clone and install automatically
git clone https://github.com/olaleyeolajide81-sketch/flavorsnap.git
cd flavorsnap
python scripts/install.py
# Access the application
# Frontend: http://localhost:3000
# Backend API: http://localhost:5000

# Clone and run with Docker
git clone https://github.com/olaleyeolajide81-sketch/flavorsnap.git
cd flavorsnap
./scripts/docker_run.sh -e development -d
# Access the application
# Frontend: http://localhost:3000
# Backend API: http://localhost:5000

# Clone and setup everything
git clone https://github.com/olaleyeolajide81-sketch/flavorsnap.git
cd flavorsnap
npm run setup

- 📖 Installation Guide - Comprehensive setup instructions
- 🔧 Troubleshooting Guide - Common issues and solutions
- ⚙️ Configuration Guide - Detailed configuration options
- 🧪 Environment Validation - Verify your setup
| Method | Best For | Command |
|---|---|---|
| Automated Script | Quick setup, all platforms | python scripts/install.py |
| Docker | Isolated environment | ./scripts/docker_run.sh -e development -d |
| Manual | Full control | npm run setup |
# Run environment validation
python scripts/check_environment.py
# Check service health
curl http://localhost:5000/health
# Test classification
curl -X POST http://localhost:5000/predict -F "image=@test-food.jpg"

FlavorSnap provides comprehensive documentation to ensure smooth installation and setup across all platforms.
| Guide | Description | Platform |
|---|---|---|
| 📖 Installation Guide | Complete step-by-step installation instructions | All platforms |
| 🔧 Troubleshooting Guide | Common issues and solutions | All platforms |
| ⚙️ Configuration Guide | Detailed configuration options | All platforms |
| 🧪 Environment Validation | Automated environment checking | All platforms |
python scripts/install.py --help

- Auto-detects platform and dependencies
- Supports Docker, manual, and hybrid installation
- Includes environment validation
- Platform-specific optimizations
./scripts/docker_run.sh -e development -d

- Isolated environment
- Consistent across platforms
- Easy scaling and deployment
- Production-ready
npm run setup

- Full control over setup
- Custom configurations
- Development-focused
- Educational value
| Platform | Docker | Manual | Auto Script | GPU Support |
|---|---|---|---|---|
| Windows 10+ | ✅ | ✅ | ✅ | ✅ |
| macOS 10.15+ | ✅ | ✅ | ✅ | ✅ |
| Ubuntu 18.04+ | ✅ | ✅ | ✅ | ✅ |
| Other Linux | ✅ | | | |
- 🤖 Smart Detection: Automatically detects your platform and available tools
- 📦 Dependency Management: Installs missing dependencies automatically
- 🔧 Auto-Fix: Attempts to fix common configuration issues
- 🎮 GPU Setup: Optional GPU configuration for NVIDIA/AMD cards
- 📊 Validation: Comprehensive environment validation
- 📝 Logging: Detailed installation logs for debugging
- 🐳 Multi-Stage Builds: Optimized image sizes
- 🔒 Security: Non-root containers, minimal attack surface
- 📊 Monitoring: Built-in health checks and metrics
- 🌐 Cross-Platform: Works identically on all systems
- 🚀 Production Ready: Includes monitoring and scaling
- 🎛️ Full Control: Complete control over every component
- 📚 Educational: Learn how each component works
- 🔧 Customization: Easy to modify and extend
- 🛠️ Development: Optimized for development workflows
After installation, run the validation script to ensure everything is working:
# Basic validation
python scripts/check_environment.py
# Comprehensive check
python scripts/check_environment.py --all --verbose
# Auto-fix common issues
python scripts/check_environment.py --fix

If you encounter issues during installation:
- Check the Troubleshooting Guide: docs/troubleshooting.md
- Run Environment Validation: `python scripts/check_environment.py --all`
- Join our Community: Telegram Group
- Report Issues: GitHub Issues
- ✅ 95%+ Success Rate with automated script
- ✅ 5-Minute Average installation time
- ✅ All Major Platforms supported
- ✅ GPU Acceleration available
- ✅ Production Ready configurations
git clone https://github.com/your-username/flavorsnap.git
cd flavorsnap

Create and activate a virtual environment, then install all dependencies:
🪟 Windows (PowerShell)
python -m venv venv
venv\Scripts\Activate.ps1
pip install -r requirements.txt

🪟 Windows (Command Prompt)
python -m venv venv
venv\Scripts\activate.bat
pip install -r requirements.txt

🍎 macOS
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

🐧 Linux
# Ensure venv module is installed (Debian/Ubuntu)
sudo apt-get install python3-venv python3-dev gcc
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

GPU Support (Optional): The default install is CPU-only. For NVIDIA GPU acceleration:
# CUDA 12.1
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121

See pytorch.org/get-started for the full matrix.
cd frontend
npm install
cp .env.example .env.local
# Edit .env.local with your configuration
npm run dev

# From the project root, with venv activated
cd ml-model-api
python app.py

- Frontend: http://localhost:3000
- API: http://localhost:5000
- API Health: http://localhost:5000/health
Create .env.local in the frontend directory:
# API Configuration
NEXT_PUBLIC_API_URL=http://localhost:5000
NEXT_PUBLIC_MODEL_ENDPOINT=/predict
# File Upload Settings
MAX_FILE_SIZE=10485760 # 10MB
ALLOWED_FILE_TYPES=jpg,jpeg,png,webp
# Model Configuration
MODEL_CONFIDENCE_THRESHOLD=0.6
ENABLE_CLASSIFICATION_HISTORY=true
# Feature Flags
ENABLE_ANALYTICS=false
ENABLE_DARK_MODE=true
# Development
NODE_ENV=development
DEBUG=true

All Python commands assume the virtual environment is activated:
# Create virtual environment (only once)
python -m venv venv
# Activate (run every time you open a new terminal)
source venv/bin/activate # macOS / Linux
venv\Scripts\activate # Windows CMD
venv\Scripts\Activate.ps1 # Windows PowerShell
# Install all core dependencies
pip install -r requirements.txt
# For development (includes linting, testing, formatting)
pip install -r requirements-dev.txt
# Verify installation
python -c "import torch; print(f'PyTorch {torch.__version__} — GPU: {torch.cuda.is_available()}')"

Deactivate the virtual environment when done:
deactivate
| File | Purpose |
|---|---|
| `requirements.txt` | Core runtime dependencies (torch, flask, pillow, etc.) |
| `requirements-dev.txt` | Dev tools (pytest, flake8, black, mypy) + core deps |
| `docs/dependencies.md` | Full dependency documentation with troubleshooting |
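As a quick sanity check after `pip install -r requirements.txt`, the pinned names can be parsed and probed for importability. This is a standard-library sketch, not a project script; the distribution-to-module mapping is illustrative:

```python
import importlib.util
import re
from typing import Optional

def parse_requirement_name(line: str) -> Optional[str]:
    """Extract the bare package name from a requirements.txt line."""
    line = line.split("#")[0].strip()        # drop comments
    if not line or line.startswith("-"):     # skip blanks and pip flags
        return None
    return re.split(r"[<>=!~\[;]", line)[0].strip()

def missing_packages(lines, import_names=None):
    """Return requirement names whose module cannot be imported.

    `import_names` maps a distribution name to its import name
    (e.g. pillow -> PIL); the defaults cover common mismatches.
    """
    import_names = import_names or {"pillow": "PIL", "opencv-python": "cv2"}
    missing = []
    for line in lines:
        name = parse_requirement_name(line)
        if name is None:
            continue
        module = import_names.get(name.lower(), name.lower().replace("-", "_"))
        if importlib.util.find_spec(module) is None:
            missing.append(name)
    return missing
```

Called as `missing_packages(open("requirements.txt"))`, an empty list means every pinned dependency resolves to an importable module.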
The trained model (model.pth) should be in the project root. If you want to train your own model:
jupyter notebook train_model.ipynb
# Follow the notebook instructions

FlavorSnap provides comprehensive Docker support for containerized development and deployment.
- `Dockerfile` - Multi-stage production container
- `Dockerfile.dev` - Development backend container
- `Dockerfile.frontend.dev` - Development frontend container
- `docker-compose.yml` - Development environment
- `docker-compose.prod.yml` - Production environment
- `docker-compose.test.yml` - Testing environment
- `.dockerignore` - Docker ignore rules
# Start development containers
./scripts/docker_run.sh -e development -d
# Build images only
./scripts/docker_build.sh -e development
# Start with custom scaling
./scripts/docker_run.sh -e development --scale-frontend 2 --scale-backend 1
# View logs
docker-compose logs -f

# Start production stack
./scripts/docker_run.sh -e production -d
# Build and push to registry
./scripts/docker_build.sh -e production --push
# Scale services
docker-compose -f docker-compose.prod.yml up --scale frontend=3 --scale backend=2

# Run all tests
./scripts/docker_run.sh -e test
# Run specific test suites
docker-compose -f docker-compose.test.yml run --rm integration-tests
docker-compose -f docker-compose.test.yml run --rm e2e-tests

- Hot Reloading: Live code changes
- Debug Mode: Enhanced logging
- Volume Mounts: Local file synchronization
- Development Tools: Testing, linting utilities
- Multi-stage Builds: Optimized image sizes
- Security Hardening: Non-root users, minimal packages
- Health Checks: Automated monitoring
- Resource Limits: Memory and CPU constraints
- Isolated Environment: Clean test execution
- Test Databases: Temporary data storage
- Coverage Reporting: Code quality metrics
- Performance Testing: Load and stress tests
Create .env file for Docker environments:
# Production Environment Variables
POSTGRES_DB=flavorsnap
POSTGRES_USER=flavorsnap
POSTGRES_PASSWORD=secure_password
REDIS_PASSWORD=redis_password
GRAFANA_PASSWORD=grafana_password
# Application Configuration
NODE_ENV=production
MODEL_CONFIDENCE_THRESHOLD=0.6
NEXT_PUBLIC_API_URL=http://backend:5000

Production Docker setup includes:
- Prometheus: Metrics collection
- Grafana: Visualization dashboards
- Health Checks: Container monitoring
- Resource Limits: CPU/memory constraints
- Log Aggregation: Centralized logging
- Non-root Containers: Secure by default
- Minimal Base Images: Reduced attack surface
- Secret Management: Environment variable protection
- Network Isolation: Internal service communication
For orchestration with Kubernetes:
# Deploy to Kubernetes
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/monitoring.yaml
# Check deployment status
kubectl get pods -n flavorsnap
kubectl get services -n flavorsnap

We love contributions! Whether you're fixing bugs, adding features, or improving documentation, your help is appreciated.
git clone https://github.com/your-username/flavorsnap.git
cd flavorsnap

# Python backend
python -m venv venv
source venv/bin/activate # or venv\Scripts\activate on Windows
pip install -r requirements-dev.txt
# Frontend
cd frontend && npm install

git checkout -b feature/amazing-feature

- Follow the existing code style
- Add tests for new functionality
- Update documentation as needed
npm run test
npm run lint
npm run build

git commit -m "feat: add amazing feature"
git push origin feature/amazing-feature

- Provide clear description of changes
- Link relevant issues
- Include screenshots for UI changes
- TypeScript: Strict mode enabled
- React: Functional components with hooks
- CSS: TailwindCSS utility classes
- Python: PEP 8 compliant
- Rust: rustfmt formatting
Follow Conventional Commits:
- `feat:` New features
- `fix:` Bug fixes
- `docs:` Documentation changes
- `style:` Code formatting
- `refactor:` Code refactoring
- `test:` Test additions
- `chore:` Maintenance tasks
- Unit tests for all new functions
- Integration tests for API endpoints
- E2E tests for user workflows
- Minimum 80% code coverage
- Update README.md for new features
- Add/update tests
- Ensure CI/CD passes
- Request code review
- Merge after approval
- UI/UX improvements
- New components
- Performance optimizations
- Mobile responsiveness
- Accessibility features
- API enhancements
- Model optimization
- Security improvements
- Database integration
- Performance tuning
- Model architecture improvements
- New food categories
- Accuracy enhancements
- Training pipeline
- Model deployment
- API documentation
- Tutorials
- Examples
- Translation
- Video guides
FlavorSnap now exposes a FastAPI-based REST API with generated OpenAPI documentation.
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
- OpenAPI schema: http://localhost:8000/openapi.json
Classify an uploaded food image with multipart form data.
Form fields:
| Field | Type | Required | Description |
|---|---|---|---|
| `image` | file | yes | JPEG, PNG, or WebP image |
| `resize` | int | no | Square resize target before inference. Default: 224 |
| `center_crop` | bool | no | Apply center crop after resize. Default: true |
| `normalize` | bool | no | Apply ImageNet normalization. Default: true |
| `top_k` | int | no | Number of ranked predictions to return. Default: 3 |
Request:
curl -X POST "http://localhost:8000/api/v1/classify" \
-F "image=@/path/to/food.jpg" \
-F "resize=256" \
-F "center_crop=true" \
-F "normalize=true" \
-F "top_k=3"

Response:
{
"prediction": "Moi Moi",
"confidence": 0.91,
"predictions": [
{ "label": "Moi Moi", "confidence": 0.91 },
{ "label": "Akara", "confidence": 0.06 },
{ "label": "Bread", "confidence": 0.03 }
],
"preprocessing": {
"resize": 256,
"center_crop": true,
"normalize": true,
"top_k": 3
},
"processing_time_ms": 18.247,
"filename": "food.jpg",
"request_id": "4b3709df-4d1f-4cad-95f7-9e86b629f470"
}

Error codes:
- `400`: empty upload or invalid image payload
- `413`: file exceeds configured upload size
- `415`: unsupported content type
- `429`: rate limit exceeded
- `500`: model loading or inference failure
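A minimal Python client ties the form fields and error codes above together. This is a sketch, not project code: it assumes the `requests` package is installed and the FastAPI server is running on port 8000.

```python
import mimetypes
import os

# Accepted upload types per the API docs
ALLOWED_TYPES = {"image/jpeg", "image/png", "image/webp"}

# Friendly messages keyed by the documented error codes
ERROR_MESSAGES = {
    400: "empty upload or invalid image payload",
    413: "file exceeds configured upload size",
    415: "unsupported content type",
    429: "rate limit exceeded",
    500: "model loading or inference failure",
}

def build_classify_fields(resize=224, center_crop=True, normalize=True, top_k=3):
    """Build the non-file form fields for POST /api/v1/classify as strings."""
    return {
        "resize": str(resize),
        "center_crop": "true" if center_crop else "false",
        "normalize": "true" if normalize else "false",
        "top_k": str(top_k),
    }

def classify(path, base_url="http://localhost:8000", **opts):
    """POST an image and return the parsed prediction, or raise on failure."""
    import requests  # assumed installed: pip install requests
    content_type = mimetypes.guess_type(path)[0]
    if content_type not in ALLOWED_TYPES:
        raise ValueError(f"unsupported image type: {content_type}")
    with open(path, "rb") as fh:
        resp = requests.post(
            f"{base_url}/api/v1/classify",
            files={"image": (os.path.basename(path), fh, content_type)},
            data=build_classify_fields(**opts),
        )
    if resp.status_code != 200:
        raise RuntimeError(ERROR_MESSAGES.get(resp.status_code, f"HTTP {resp.status_code}"))
    return resp.json()
```

`classify("food.jpg", resize=256, top_k=3)` mirrors the curl request shown earlier in this section.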
Check API health and model readiness.
Response:
{
"status": "ok",
"model_loaded": true,
"classes": ["Akara", "Bread", "Egusi", "Moi Moi", "Rice and Stew", "Yam"],
"startup_error": null
}

Classify uploaded food image.
Request:
curl -X POST \
http://localhost:5000/predict \
-F 'image=@/path/to/food.jpg'

Response:
{
"label": "Moi Moi",
"confidence": 85.7,
"all_predictions": [
{ "label": "Moi Moi", "confidence": 85.7 },
{ "label": "Akara", "confidence": 9.2 },
{ "label": "Bread", "confidence": 3.1 }
],
"processing_time": 0.234
}

List predictions with pagination, filtering, and sorting.
Query parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `page` | int | 1 | Page number (offset-based) |
| `limit` | int | 20 | Items per page (max 100) |
| `cursor` | string | — | Opaque cursor for cursor-based pagination (from previous next_cursor) |
| `label` | string | — | Filter by label (exact or comma-separated list) |
| `confidence_min` | float | — | Minimum confidence (0–100) |
| `confidence_max` | float | — | Maximum confidence (0–100) |
| `created_after` | ISO datetime | — | Filter predictions after this time |
| `created_before` | ISO datetime | — | Filter predictions before this time |
| `sort_by` | string | created_at | Sort field: created_at, label, confidence, id |
| `order` | string | desc | Sort order: asc, desc |
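The `cursor` parameter supports a simple fetch-all loop on the client side. A sketch of the pattern — `fetch_page` is a stand-in for the HTTP call, and the response keys match the example response later in this section:

```python
def iter_predictions(fetch_page, limit=20):
    """Yield every prediction, following next_cursor until exhausted.

    `fetch_page(cursor, limit)` must return a dict shaped like the
    /predictions response: {"predictions": [...], "next_cursor": ...}.
    """
    cursor = None
    while True:
        page = fetch_page(cursor=cursor, limit=limit)
        yield from page["predictions"]
        cursor = page.get("next_cursor")
        if not cursor:
            break
```

A real `fetch_page` would issue `GET /predictions` (e.g. with `requests`) and pass `cursor` and `limit` as query parameters.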
Example (offset):
curl "http://localhost:5000/predictions?page=1&limit=20&sort_by=created_at&order=desc"

Example (cursor):
curl "http://localhost:5000/predictions?cursor=eyJ...&limit=20"

Response:
{
"predictions": [
{ "id": "uuid", "label": "Moi Moi", "confidence": 85.0, "created_at": "2025-02-23T12:00:00+00:00" }
],
"pagination": { "page": 1, "limit": 20, "total": 42, "total_pages": 3 },
"next_cursor": "base64...",
"prev_cursor": null,
"count": 20
}

Check API health status.
Response:
{
"status": "healthy",
"model_loaded": true,
"version": "1.0.0"
}

Get list of supported food classes.
Response:
{
"classes": ["Akara", "Bread", "Egusi", "Moi Moi", "Rice and Stew", "Yam"],
"count": 6
}

Error responses follow this format:

{
"error": "Invalid image format",
"code": "INVALID_FILE_TYPE",
"message": "Only JPG, PNG, and WebP images are supported"
}

# Frontend tests
cd frontend
npm run test
npm run test:coverage
npm run test:e2e
# Backend (FastAPI API + Panel UI) tests
# Runs only tests under `tests/` (see repo `pyproject.toml`).
python scripts/run_tests.py
# Same suite with coverage (enforces 90%+).
python scripts/coverage_report.py
# Performance benchmarks
python scripts/run_tests.py --performance-smoke
python scripts/run_tests.py --performance-full

tests/
├── 📁 api/ # FastAPI endpoint tests (existing)
├── 📁 unit/ # Core module unit tests
├── 📁 integration/ # API/UI integration tests
└── 📁 performance/ # pytest-benchmark performance checks
Most Python fixtures are generated deterministically at runtime (see tests/conftest.py).
Any on-disk lightweight assets live under tests/fixtures/.
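The deterministic-fixture pattern looks roughly like this — a seeded RNG guarantees identical values on every run (names here are illustrative, not the project's actual fixtures):

```python
import random

def deterministic_logits(seed=1234, n_classes=6):
    """Reproducible fake model outputs for tests: same seed, same values."""
    rng = random.Random(seed)
    return [rng.uniform(-5.0, 5.0) for _ in range(n_classes)]

# As a pytest fixture (e.g. in tests/conftest.py):
# @pytest.fixture
# def logits():
#     return deterministic_logits()
```

Because the values are derived from the seed rather than stored on disk, the fixture never drifts between environments.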
- Base Model: ResNet18 (ImageNet pretrained)
- Input Size: 224x224 RGB images
- Output Classes: 6 Nigerian food categories
- Parameters: 11.7M total, 1.2M trainable
- Dataset: 2,400+ images (400 per class)
- Training Split: 80% train, 20% validation
- Epochs: 50 with early stopping
- Optimizer: Adam (lr=0.001)
- Accuracy: 94.2% validation accuracy
- Akara - Bean cake
- Bread - Various bread types
- Egusi - Melon seed soup
- Moi Moi - Bean pudding
- Rice and Stew - Rice with tomato stew
- Yam - Yam dishes
- Top-1 Accuracy: 94.2%
- Top-3 Accuracy: 98.7%
- Inference Time: ~200ms (CPU)
- Model Size: 44MB
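To make the confidence numbers concrete: the network emits one logit per class, and the reported percentages come from a softmax over those logits. A dependency-free sketch of that post-processing (the real pipeline first runs torchvision transforms on a 224x224 input):

```python
import math

CLASSES = ["Akara", "Bread", "Egusi", "Moi Moi", "Rice and Stew", "Yam"]

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(logits, labels=CLASSES, k=3):
    """Return the k highest-confidence (label, confidence) entries."""
    probs = softmax(logits)
    ranked = sorted(zip(labels, probs), key=lambda p: p[1], reverse=True)
    return [{"label": label, "confidence": round(c, 4)} for label, c in ranked[:k]]
```

The `top_k` output mirrors the ranked `predictions` array in the API responses above.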
# Check model path
ls -la model.pth
# Verify file integrity
python -c "import torch; print(torch.load('model.pth').keys())"

# Clear cache
rm -rf .next node_modules
npm install
npm run build

# Check if API is running
curl http://localhost:5000/health
# Verify CORS settings
curl -H "Origin: http://localhost:3000" http://localhost:5000/predict

# Monitor memory usage
python -c "import torch; print(f'GPU Available: {torch.cuda.is_available()}')"
# Reduce batch size if needed

Enable debug logging:
DEBUG=true
LOG_LEVEL=debug

- Use WebP images for faster uploads
- Implement image compression on client-side
- Cache model predictions for similar images
- Use CDN for static assets
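The caching suggestion can be sketched as a content-addressed lookup: byte-identical uploads hash to the same key, so repeated uploads skip inference (illustrative only — truly "similar" images would need a perceptual hash):

```python
import hashlib

_cache = {}

def cached_predict(image_bytes, predict):
    """Memoize predictions by the SHA-256 digest of the raw upload bytes."""
    key = hashlib.sha256(image_bytes).hexdigest()
    if key not in _cache:
        _cache[key] = predict(image_bytes)  # only runs on a cache miss
    return _cache[key]
```

In production this dict would typically be replaced by a bounded or external store (e.g. Redis, which the Docker stack already provisions).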
This project is licensed under the MIT License - see the LICENSE file for details.
- PyTorch for the deep learning framework
- Next.js for the React framework
- TailwindCSS for the styling framework
- Stellar/Soroban for blockchain integration
- The Nigerian food community for dataset contributions
- Telegram Group: Join our community
- GitHub Issues: Report bugs
- Email: support@flavorsnap.com