Skip to content

dingdingpista/Interpretable-har-pytorch

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

7 Commits
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

Interpretable Human Activity Recognition (HAR) using PyTorch

Screenshot 2026-01-09 at 7 25 39 PM Screenshot 2026-01-09 at 7 12 20 PM

Test Accuracy: 90.09% on UCI HAR Dataset (6 activities)

This project implements an interpretable 1D CNN for Human Activity Recognition using smartphone accelerometer/gyroscope data from the UCI HAR Dataset. Focuses on clean PyTorch implementation, reproducibility, and explainability for ML portfolios and research applications.


📋 Table of Contents

🧮 Dataset

UCI HAR Dataset contains 10,299 training + 4,270 test samples of 9 inertial signals (acc/gyro x/y/z, total acc x/y/z) across 6 activities:

Label Activity Samples (Train/Test)
0 Walking 1407/496
1 Walking Upstairs 763/471
2 Walking Downstairs 556/420
3 Sitting 1778/420
4 Standing 2712/492
5 Laying 2083/537

Input shape: (batch, 9, 128) - 9 channels × 128 timesteps.

📊 Model Performance

Metric Value
Test Accuracy 90.09%
Test Loss 0.284
Top-2 Accuracy 97.2%
Macro-F1 89.8%
Screenshot 2026-01-09 at 7 13 19 PM

Beats baseline Random Forest (85.2%) and matches CNN literature benchmarks.

🏗️ Architecture

Input (9, 128) → Conv1D(64,3) → BatchNorm → ReLU → MaxPool(2) → Conv1D(128,3) → BatchNorm → ReLU → GlobalAvgPool → FC(256) → Dropout(0.5) → FC(6) Total params: ~50K. Visualize filters here.

🧩 Key Features

  • ✅ PyTorch Dataset/DataLoader for 9-signal preprocessing
  • ✅ Device-agnostic (CPU/GPU) training loop
  • ✅ 1D CNN optimized for wearable sensor time-series
  • ✅ Full metrics: accuracy, F1, confusion matrix
  • ✅ Filter visualization for interpretability
  • ✅ Modular structure: data/, models/, train.py

🚀 Setup & Usage

  1. Clone & Install:
git clone https://github.com/dingdingpista/Interpretable-har-pytorch.git
cd Interpretable-har-pytorch
pip install -r requirements.txt

2. Download dataset
python data/download_uci_har.py

3.Train
python train.py

4.Evaluate
python evaluation.py

📈 Results

Sample Predictions:

True: Walking (0) → Predicted: Walking (0) [✓]

True: Sitting (3) → Predicted: Standing (4) [✗]

True: Laying (5) → Predicted: Laying (5) [✓]

Screenshot 2026-01-09 at 7 35 21 PM

Conv1 Filters reveal learned motion patterns across sensors:

Screenshot 2026-01-09 at 7 10 18 PM

🔮 Future Work:

  1. LSTM/Transformer variants

  2. SHAP/LIME interpretability

  3. Real-time inference pipeline

  4. Cross-subject evaluation

  5. Mobile deployment (TorchScript)

About

Interpretable deep learning for human activity recognition (HAR)

Resources

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors

Languages