This repository contains an Explainable AI (XAI) model for classifying encrypted network traffic using a combination of machine learning and deep learning techniques. The model uses SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to interpret and visualize how individual features contribute to each classification.
Encrypted traffic classification is essential for modern cybersecurity systems to detect and monitor potential threats while preserving user privacy. This project:
- Trains a hybrid model to detect types of encrypted traffic (e.g., Tor, VPN, I2P)
- Applies SHAP and LIME to explain model predictions
- Demonstrates visualizations and interpretable results
Key features:
- Binary and multi-class classification support
- Model interpretability using SHAP & LIME
- Lightweight feature preprocessor
- Plug-and-play support for your own dataset
Project structure:
- `data/`: dataset samples or links
- `models/`: trained model files
- `explainability/`: SHAP and LIME scripts and results
- `src/`: core Python scripts
- `notebooks/`: Jupyter notebooks
- `results/`: output results and figures
Quick start:
- Install dependencies: `pip install -r requirements.txt`
- Train the model: `python src/train.py`
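For orientation, a minimal sketch of what a training script along these lines might contain, using scikit-learn on synthetic data. The actual `src/train.py` will differ in features, model choice, and how the trained model is saved:

```python
# Minimal illustrative training script (synthetic data); the real src/train.py
# will differ in features, model choice, and persistence.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))                  # stand-in flow-level features
y = (X[:, 0] > 0).astype(int) + (X[:, 1] > 0)  # toy 3-class labels (e.g. Tor/VPN/I2P)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
clf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.3f}")
```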
### 📸 Screenshots

#### 🔹 Streamlit Dashboard
Upload a CSV, toggle between binary and multi-class modes, and view results in real time.

#### 🔹 SHAP Summary Plot – Binary
Global feature importance from SHAP.

#### 🔹 SHAP Summary Plot – Multi-Class
Multi-class SHAP visualization.

#### 🔹 LIME Local Explanation
Per-instance interpretability via LIME.
#### 🔹 Binary Confusion Matrix

#### 🔹 LIME Explanation – Binary Instance
(Open the `.html` file in a browser to interact with the explanation.)
➡️ [`lime_binary_instance.html`](results/lime_binary_instance.html)
#### 🔹 LIME Explanation – Multi-Class Instance
➡️ [`lime_multi_instance.html`](results/lime_multi_instance.html)