This project provides an API and CLI for generating model-explanation visualizations and narratives, using SHAP, LIME, and Integrated Gradients to interpret machine learning model predictions. It is containerized with Docker for easy deployment and reproducibility.
## Features

- Accepts structured input data and returns SHAP, LIME, or Integrated Gradients explanations
- Supports tabular, text, and image models (with extensibility for others)
- Dockerized for scalable and environment-agnostic deployment
- Saves plots to disk and serves results via API and CLI
- Extensible: plug in your own models and explanation methods
- Batch and single-row explanation support
- Human-friendly, narrative-rich JSON and HTML report outputs
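As a rough sketch of how single-row and batch requests from the feature list above could look, the snippet below builds a request body and posts it to the API. The endpoint path `/explain` and the payload field names are illustrative assumptions, not the project's documented schema:

```python
import json
from urllib.request import Request, urlopen

API_URL = "http://localhost:8000/explain"  # assumed endpoint path


def build_explain_payload(rows, method="shap"):
    """Build a request body for the explanation API.

    `rows` is a list of feature dicts, so a single-row request is just a
    one-element list and a batch request is a longer list. Field names
    here are assumptions, not the project's actual schema.
    """
    if method not in {"shap", "lime", "integrated_gradients"}:
        raise ValueError(f"unsupported method: {method}")
    return {"method": method, "instances": rows}


if __name__ == "__main__":
    payload = build_explain_payload(
        [{"age": 42, "income": 55000.0}], method="shap"
    )
    req = Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    # Requires the API server to be running locally.
    print(urlopen(req).read().decode())
```

The same payload shape covers both single-row and batch explanations: the client always sends a list of instances.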
## Tech Stack

- Python 3.10+
- FastAPI
- SHAP, LIME, Captum (Integrated Gradients)
- Scikit-learn / XGBoost / LightGBM / PyTorch / TensorFlow (optional)
- Matplotlib / Plotly
- Docker & Docker Compose
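The feature list describes the project as extensible, letting you plug in your own models and explanation methods. A minimal sketch of what such a plug-in interface could look like is shown below; the `Explainer` base class, the `register` decorator, and the registry name are assumptions for illustration, not the project's actual API:

```python
from abc import ABC, abstractmethod

# Registry mapping method names (e.g. "shap", "lime") to explainer classes.
EXPLAINERS: dict = {}


class Explainer(ABC):
    """Assumed base interface for a pluggable explanation method."""

    @abstractmethod
    def explain(self, model, data):
        """Return per-feature attributions for `data` under `model`."""


def register(name):
    """Class decorator that registers an Explainer under `name`."""
    def wrap(cls):
        EXPLAINERS[name] = cls
        return cls
    return wrap


@register("zero")
class ZeroExplainer(Explainer):
    """Trivial baseline: attributes zero importance to every feature."""

    def explain(self, model, data):
        return {feature: 0.0 for feature in data}
```

With a registry like this, the API or CLI could look up the requested `--method` by name and instantiate the matching explainer.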
## Getting Started

Clone the repository and install the dependencies:

```bash
git clone https://github.com/your-username/shap-explainer-backend.git
cd shap-explainer-backend
pip install -r requirements.txt
```

Start the API server:

```bash
uvicorn main:app --reload
```

Or run an explanation from the CLI:

```bash
python cli_explain.py --model <model_file> --data <data_file> --method shap|lime|integrated_gradients
```

## License

This project is licensed under the MIT License. See the LICENSE file for details.