An end-to-end NLP web app that classifies product reviews as positive or negative in real time using a scikit-learn pipeline, MLflow tracking, a FastAPI backend, and a modern vanilla frontend.
- NLP pipeline – text cleaning, TF-IDF vectorization, and Logistic Regression for binary sentiment classification
- Experiment tracking – MLflow logs parameters, metrics, and model artifacts for each run
- REST API – FastAPI backend with `/predict` and `/health` endpoints
- Modern UI – responsive dark interface built with plain HTML, CSS, and JavaScript
- Docker ready – single-container deployment with a Render-friendly `PORT` setup
- Render ready – easy cloud deployment with a stable public URL
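The classification pipeline described above can be sketched roughly as follows. This is an illustrative sketch, not the exact code in `src/train.py` — the `clean_text` helper, toy data, and hyperparameters are assumptions:

```python
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline


def clean_text(text: str) -> str:
    # Illustrative cleaner: lowercase and strip punctuation
    return re.sub(r"[^a-z0-9\s]", " ", text.lower())


# TF-IDF features feeding a Logistic Regression classifier
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(preprocessor=clean_text)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Toy fit to show the interface; the real model trains on the seeded dataset
texts = [
    "great product, love it",
    "terrible, broke after a day",
    "works perfectly and looks amazing",
    "awful quality, do not buy",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative
pipeline.fit(texts, labels)
```

Because the whole pipeline is a single scikit-learn estimator, it can be logged to MLflow and reloaded as one artifact.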
| Layer | Technology |
|---|---|
| Model | scikit-learn · LogisticRegression · TF-IDF |
| Tracking | MLflow |
| API | FastAPI · Uvicorn |
| Frontend | Vanilla HTML · CSS · JavaScript |
| Container | Docker |
| Deploy | Render |
```text
sentiment-radar/
├── assets/
│   └── website.png        # Frontend screenshot used in this README
├── data/                  # Local dataset and SQLite database files
├── mlruns/                # MLflow run artifacts
├── src/
│   ├── api.py             # FastAPI app and prediction endpoints
│   ├── config.py          # Paths and environment configuration
│   ├── db.py              # SQLAlchemy models and SQLite engine
│   ├── seed_db.py         # Dataset seeding helper
│   ├── train.py           # Training pipeline + MLflow logging
│   └── static/
│       ├── index.html     # Frontend entry point
│       ├── styles.css     # UI styles
│       └── app.js         # Frontend logic
├── Dockerfile
├── requirements.txt
└── README.md
```
```bash
docker build -t sentiment-radar .
docker run -p 10000:10000 sentiment-radar
```

Open: `http://localhost:10000`
```bash
pip install -r requirements.txt
```

Then start the API:

```bash
set PYTHONPATH=src
uvicorn api:app --host 0.0.0.0 --port 8000 --app-dir src
```

Open: `http://localhost:8000`
If you open the frontend with Live Server, keep the API running separately on port `8000` or `10000`.
- Reviews are stored in the local SQLite database.
- `src/train.py` loads the data and trains a text classification pipeline.
- MLflow logs the experiment run and model artifact.
- `src/api.py` loads the latest trained model and serves predictions through the API.
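The step that turns a model probability into the API's JSON payload can be sketched as a small helper. `build_response` is a hypothetical name for illustration; `src/api.py` may structure this differently:

```python
def build_response(proba_positive: float) -> dict:
    # Hypothetical helper: map the positive-class probability from
    # predict_proba to the sentiment/confidence payload the API returns
    if proba_positive >= 0.5:
        return {"sentiment": "Positive", "confidence": round(proba_positive, 2)}
    return {"sentiment": "Negative", "confidence": round(1 - proba_positive, 2)}


print(build_response(0.97))  # {'sentiment': 'Positive', 'confidence': 0.97}
```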
```bash
set PYTHONPATH=src
python src/train.py
```

- Push the repository to GitHub.
- Go to render.com and create a New Web Service.
- Connect your GitHub repository.
- Select Docker as the environment.
- Deploy the service.
- Render will assign a public URL like `https://your-app.onrender.com`.

The container is configured to respect the `PORT` environment variable, which is the safest setup for Render.
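One way to wire this up in the Dockerfile is sketched below — this is an assumption about the setup, not necessarily the repository's actual Dockerfile. A shell-form `CMD` lets `${PORT}` expand at container start, with `10000` as the local default:

```dockerfile
# Sketch only: default PORT for local runs; Render overrides it at deploy time
ENV PORT=10000
EXPOSE 10000

# Shell form so ${PORT} is expanded by the shell when the container starts
CMD uvicorn api:app --host 0.0.0.0 --port ${PORT} --app-dir src
```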
| Method | Endpoint | Description |
|---|---|---|
| GET | `/` | Serves the frontend |
| GET | `/health` | Returns API and model status |
| POST | `/predict` | Returns the predicted sentiment |
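Assuming the API is running locally on port `8000`, the prediction endpoint can be exercised with curl:

```shell
curl -X POST http://localhost:8000/predict \
  -H "Content-Type: application/json" \
  -d '{"text": "I absolutely love this phone."}'
```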
Request:

```json
{
  "text": "I absolutely love this phone. The screen is sharp and the battery lasts all day."
}
```

Response:

```json
{
  "sentiment": "Positive",
  "confidence": 0.97
}
```

The UI is intentionally minimal, modern, and responsive. It includes:
- a clear hero section
- a live prediction card
- sample review buttons
- confidence visualization
- green/red animated feedback based on the prediction
