A full-stack web application for predicting customer churn using machine learning models (XGBoost). Built with FastAPI, Docker, and a modern web interface.
- FastAPI Backend: High-performance API for churn prediction
- Machine Learning Models: XGBoost-based teacher and risk models
- Modern Web Interface: Beautiful, responsive UI for predictions
- Docker Support: Containerized deployment with Docker and Docker Compose
- Cloud Ready: Deploy to any cloud platform
- Python 3.11+
- Docker and Docker Compose (for containerized deployment)
- Model file: `telco_churn_models.pkl`

Place the `telco_churn_models.pkl` file in the project root directory. The model file contains:

- `xgb_teacher`: Teacher model for churn prediction
- `xgb_risk`: Risk model for churn prediction
- `feature_names`: List of feature names expected by the model
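To sanity-check that a model file has this structure before running the API, you can load it with `pickle` and inspect the keys. The snippet below is only a sketch: it builds a stand-in dict with placeholder values instead of real XGBoost models, and the feature names are illustrative, not taken from the actual file.

```python
import os
import pickle
import tempfile

# Stand-in for telco_churn_models.pkl: placeholder values instead of
# trained XGBoost models, but the same top-level dict structure.
bundle = {
    "xgb_teacher": "teacher-model-placeholder",
    "xgb_risk": "risk-model-placeholder",
    "feature_names": ["tenure", "MonthlyCharges"],  # illustrative names
}

path = os.path.join(tempfile.mkdtemp(), "telco_churn_models.pkl")
with open(path, "wb") as f:
    pickle.dump(bundle, f)

# Load it back the way the API would and confirm the expected keys.
with open(path, "rb") as f:
    loaded = pickle.load(f)

missing = {"xgb_teacher", "xgb_risk", "feature_names"} - loaded.keys()
print("missing keys:", missing)  # empty set means the structure looks right
```

Running the same load-and-check against the real file will surface a missing or misnamed key before it turns into a runtime error in the API.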
```bash
# Install dependencies
pip install -r requirements.txt

# Run the application
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

Access the application at: http://localhost:8000
```bash
# Build and run with Docker Compose
docker-compose up --build

# Or build and run with Docker directly
docker build -t churn-api .
docker run -p 8000:8000 -v $(pwd)/telco_churn_models.pkl:/app/telco_churn_models.pkl churn-api
```

Access the application at: http://localhost:8000
- Install Google Cloud SDK
- Build and push Docker image:
```bash
gcloud builds submit --tag gcr.io/YOUR_PROJECT_ID/churn-api
gcloud run deploy churn-api --image gcr.io/YOUR_PROJECT_ID/churn-api --platform managed --region us-central1 --allow-unauthenticated
```

- Get URL: Google Cloud Run provides a public URL
- Launch an EC2 instance (Ubuntu)
- SSH into instance
- Install Docker:
```bash
sudo apt-get update
sudo apt-get install docker.io docker-compose -y
```

- Clone repository:

```bash
git clone REPO_URL
cd customer-churn-api
```

- Copy the model file to the instance
- Run with Docker:

```bash
docker-compose up -d
```

- Configure the security group to allow port 8000
- Get the public IP from the EC2 dashboard
- Sign up at DigitalOcean
- Create App → Connect GitHub
- Configure:
- Type: Web Service
- Dockerfile path: `Dockerfile`
- Port: 8000
- Add model file as a static asset
- Deploy: DigitalOcean provides a public URL
Returns the web interface for predictions.
Predicts customer churn based on input data.
Request Body:
```json
{
  "data": {
    "feature1": value1,
    "feature2": value2,
    ...
  }
}
```

Response:

```json
{
  "teacher_model_prediction": 0,
  "risk_model_prediction": 0.1234,
  "status": "success"
}
```

Project structure:

```
customer-churn-api/
├── main.py                  # FastAPI application
├── requirements.txt         # Python dependencies
├── Dockerfile               # Docker configuration
├── docker-compose.yml       # Docker Compose configuration
├── telco_churn_models.pkl   # Your ML model (add this file)
├── templates/
│   └── index.html           # Web interface
├── static/
│   ├── style.css            # Styling
│   └── script.js            # Frontend JavaScript
└── README.md                # This file
```
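The prediction endpoint documented above can be exercised from Python using only the standard library. This is a sketch: the `/predict` path and the feature names (`tenure`, `MonthlyCharges`) are assumptions for illustration, so adjust them to match the actual route and `feature_names` in your model file.

```python
import json
import urllib.request

def build_request(features, url="http://localhost:8000/predict"):
    """Build a POST request matching the API's documented body shape.

    The endpoint path and feature names here are illustrative
    assumptions, not taken from the project's route definitions.
    """
    body = json.dumps({"data": features}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request({"tenure": 12, "MonthlyCharges": 70.5})
print(req.get_method(), req.full_url)

# Sending it (requires the server to be running):
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
#     print(result["risk_model_prediction"])
```

The response fields (`teacher_model_prediction`, `risk_model_prediction`, `status`) are the ones shown in the example response above.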
Set the model path using an environment variable:

```bash
export MODEL_PATH=/path/to/your/model.pkl
```

If the model file is not found:

- Ensure `telco_churn_models.pkl` is in the project root
- Check file permissions
- Verify the file path in Docker volumes

If port 8000 is already in use:

- Change the port in `docker-compose.yml` or the `docker run` command
- Update the port mapping: `"8080:8000"` instead of `"8000:8000"`
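On the application side, an override like `MODEL_PATH` is typically read with `os.environ` and a default. This is a sketch of the pattern, not the actual loading code in `main.py`:

```python
import os

# Fall back to the bundled filename when MODEL_PATH is not set.
model_path = os.environ.get("MODEL_PATH", "telco_churn_models.pkl")
print(model_path)

os.environ["MODEL_PATH"] = "/tmp/custom_model.pkl"  # simulate the export above
model_path = os.environ.get("MODEL_PATH", "telco_churn_models.pkl")
print(model_path)  # now resolves to the override
```

Because the lookup happens at startup, the variable must be set (or passed via `docker run -e MODEL_PATH=...`) before the server process launches.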
- Ensure all files are in the correct directories
- Check that `requirements.txt` is present
- Verify Docker is running
- The model file (`telco_churn_models.pkl`) is excluded from git (see `.gitignore`)
- Make sure to upload the model file to your cloud platform
- For production, consider adding authentication and rate limiting
- The application runs on port 8000 by default
For issues or questions, please check:
- Model file is correctly placed
- All dependencies are installed
- Docker is properly configured
- Ports are accessible