
🧠 NeuroScan AI – Brain Tumor Classifier

Deep Learning Pipeline for MRI-Based Brain Tumor Classification

Python PyTorch TorchVision Kaggle License: MIT Stars

An end-to-end deep learning pipeline for automated brain tumor classification from MRI scans — benchmarking Custom CNN, ResNet18, and EfficientNet-V2-S using PyTorch.

🌍 Live Demo · 📓 Kaggle Notebook · 📂 GitHub · 💬 Contact


📋 Quick Navigation

| Section | Link |
|---|---|
| Overview | 🌟 Jump to Overview |
| Dataset & Classes | 📂 Jump to Dataset |
| Model Architectures | 🤖 Jump to Models |
| Results | 📊 Jump to Results |
| Getting Started | 🚀 Jump to Getting Started |
| Project Structure | 🗂 Jump to Structure |
| Tech Stack | 🛠️ Jump to Tech Stack |

🌟 Overview

NeuroScan AI is a production-quality deep learning project for automated multi-class brain tumor classification from MRI images — complete with a live web demo for real-time predictions.

  • 🧬 Full ML Pipeline – Preprocessing, augmentation, training, evaluation, and inference
  • 🤖 3 Architectures Benchmarked – Custom CNN, ResNet18, and EfficientNet-V2-S
  • 🏆 ~98% Test Accuracy – via EfficientNet-V2-S transfer learning
  • 🌐 Live Web App – Try it now at neuralscanai.vercel.app
  • 📓 Reproducible – Full Kaggle notebook with free GPU (T4) runtime
  • 📊 Rich Evaluation – Confusion matrices, F1-scores, training curves per model

⚠️ Disclaimer: This project is for educational and research purposes only. It is not a certified medical device and must not be used for clinical diagnosis. Always consult qualified healthcare professionals.


🎯 Features at a Glance

🧮 Machine Learning

  • ✅ End-to-end image preprocessing pipeline
  • ✅ 3 deep learning architectures compared
  • ✅ Transfer learning from ImageNet weights
  • ✅ Data augmentation (flip, rotate, color jitter)
  • ✅ Confusion matrix & per-class F1 analysis
  • ✅ GPU-accelerated training (CUDA)
  • ✅ Reproducible random seed setup

🌐 Web Application

  • ✅ Upload MRI scan for instant prediction
  • ✅ Real-time class + confidence output
  • ✅ Probability score for all 4 classes
  • ✅ Responsive, mobile-friendly UI
  • ✅ Hosted on Vercel (globally accessible)
  • ✅ Fast inference (< 100ms)
  • ✅ No setup required — just open and use

🧠 How It Works

┌──────────────────────────────────────────────────────────────┐
│  1. DATA LOADING                                             │
│  • MRI Dataset: ~7,023 images across 4 classes              │
│  • ImageFolder auto-labels from folder structure            │
│  • Train / Test split preserved from dataset source         │
└────────────────────┬─────────────────────────────────────────┘
                     ↓
┌──────────────────────────────────────────────────────────────┐
│  2. PREPROCESSING & AUGMENTATION                             │
│  • Resize all images to 224×224                             │
│  • Augment training: flip, rotate, color jitter            │
│  • Normalize with ImageNet mean & std                       │
└────────────────────┬─────────────────────────────────────────┘
                     ↓
┌──────────────────────────────────────────────────────────────┐
│  3. MODEL TRAINING                                           │
│  • Baseline  : Custom CNN (trained from scratch)             │
│  • Transfer  : ResNet18 (ImageNet pretrained)                │
│  • Transfer  : EfficientNet-V2-S (ImageNet pretrained)       │
└────────────────────┬─────────────────────────────────────────┘
                     ↓
┌──────────────────────────────────────────────────────────────┐
│  4. EVALUATION                                               │
│  • Accuracy, Precision, Recall, F1-Score per class          │
│  • Confusion matrix heatmaps for each model                 │
│  • Training & validation loss/accuracy curves               │
└────────────────────┬─────────────────────────────────────────┘
                     ↓
┌──────────────────────────────────────────────────────────────┐
│  5. INFERENCE (Web App + Local Script)                       │
│  • Upload MRI → preprocess → predict class + confidence     │
│  • Live at: https://neuralscanai.vercel.app/                │
└──────────────────────────────────────────────────────────────┘

📂 Dataset & Classes

📊 Dataset Overview

| Property | Value |
|---|---|
| Source | Brain Tumor MRI Dataset – Kaggle |
| Total Images | ~7,023 MRI scans |
| Classes | 4 |
| Format | JPEG / PNG |
| Input Size | Resized to 224 × 224 |
| Split | Training / Testing (folder-based) |

🔍 Tumor Classes

| Class | Label | Description | Typical Characteristics |
|---|---|---|---|
| 🔴 Glioma | glioma | Tumor arising from glial cells | Aggressive, irregular borders |
| 🟠 Meningioma | meningioma | Tumor in the meninges (brain lining) | Slow-growing, well-defined mass |
| 🟡 Pituitary | pituitary | Tumor in the pituitary gland | Located at base of skull |
| 🟢 No Tumor | notumor | Healthy brain scan | No abnormal mass detected |

🔬 Preprocessing Pipeline

from torchvision import transforms

# ─── Training transforms (with augmentation) ───────────────────
train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])   # ImageNet stats
])

# ─── Validation / Test transforms (no augmentation) ────────────
val_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])

🤖 Model Architectures

1. 🔷 Custom CNN (Baseline)

A hand-designed convolutional network trained entirely from scratch — establishes the performance floor.

Input: [B, 3, 224, 224]
  → Conv2d(3→32,   3×3) + BatchNorm + ReLU + MaxPool(2×2)
  → Conv2d(32→64,  3×3) + BatchNorm + ReLU + MaxPool(2×2)
  → Conv2d(64→128, 3×3) + BatchNorm + ReLU + MaxPool(2×2)
  → Conv2d(128→256,3×3) + BatchNorm + ReLU + MaxPool(2×2)
  → AdaptiveAvgPool2d(1, 1)
  → Flatten → Dropout(p=0.5) → Linear(256 → 4)
Output: [B, 4]  (raw logits)
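A minimal PyTorch sketch of this baseline, with the block count and channel widths taken from the diagram above (exact layer details in the notebook may differ):

```python
import torch
import torch.nn as nn

class CustomCNN(nn.Module):
    """Baseline CNN: 4 conv blocks -> global average pool -> dropout -> linear."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        def block(c_in: int, c_out: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(
            block(3, 32), block(32, 64), block(64, 128), block(128, 256),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Sequential(
            nn.Flatten(), nn.Dropout(p=0.5), nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.pool(self.features(x)))  # raw logits, shape [B, 4]
```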

2. 🔶 ResNet18 (Transfer Learning)

Pretrained ResNet18 with residual skip connections, fine-tuned for 4-class tumor classification.

import torchvision.models as models
import torch.nn as nn

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Replace the final fully-connected head
model.fc = nn.Sequential(
    nn.Dropout(p=0.4),
    nn.Linear(512, 4)         # 512 → 4 tumor classes
)

Why ResNet18?

  • Residual (skip) connections prevent vanishing gradients in deep networks
  • Compact at 11M parameters — fast and efficient to fine-tune
  • Rich ImageNet features transfer strongly to medical imaging tasks

3. 🟣 EfficientNet-V2-S (Transfer Learning) ⭐ Best

EfficientNet-V2-S uses Fused-MBConv blocks and progressive learning — a major upgrade over the original EfficientNet family in both accuracy and training speed.

from torchvision.models import efficientnet_v2_s, EfficientNet_V2_S_Weights
import torch.nn as nn

model = efficientnet_v2_s(weights=EfficientNet_V2_S_Weights.IMAGENET1K_V1)

# Replace classifier head for 4-class output
model.classifier = nn.Sequential(
    nn.Dropout(p=0.4),
    nn.Linear(1280, 4)        # 1280 → 4 tumor classes
)

Why EfficientNet-V2-S?

  • Fused-MBConv blocks significantly accelerate training vs V1
  • Progressive learning strategy improves generalization on smaller datasets
  • Better accuracy than ResNet18 with a superior parameter-to-accuracy ratio
  • Strong pretrained ImageNet features for fine-grained image classification

📊 Results & Performance

🏆 Model Comparison

| Rank | Architecture | Parameters | Train Acc | Val Acc | Test Acc | F1-Score |
|---|---|---|---|---|---|---|
| 🥇 | EfficientNet-V2-S | 21.5M | ~99% | ~98% | ~98% | ~0.98 |
| 🥈 | ResNet18 | 11M | ~99% | ~97% | ~97% | ~0.97 |
| 🥉 | Custom CNN | ~2M | ~90% | ~88% | ~88% | ~0.87 |

🏆 EfficientNet-V2-S achieves the highest classification accuracy — the optimal choice for both performance and real-world deployment.

🎯 Final EfficientNet-V2-S Metrics

📈 KEY METRICS (EfficientNet-V2-S on Test Set)
├─ Overall Accuracy : ~98%
├─ Macro F1-Score   : ~0.98
├─ Macro Precision  : ~0.98
└─ Macro Recall     : ~0.97

🔬 Per-Class Report — EfficientNet-V2-S

| Class | Precision | Recall | F1-Score |
|---|---|---|---|
| Glioma | 0.98 | 0.97 | 0.97 |
| Meningioma | 0.95 | 0.96 | 0.95 |
| No Tumor | 0.99 | 0.99 | 0.99 |
| Pituitary | 0.99 | 0.98 | 0.99 |
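A report like the one above can be produced with scikit-learn; this `evaluate` helper is an illustrative sketch, not the notebook's exact code:

```python
import torch
from sklearn.metrics import classification_report, confusion_matrix

CLASS_NAMES = ["glioma", "meningioma", "notumor", "pituitary"]

@torch.no_grad()
def evaluate(model, loader, device):
    """Collect predictions over a test loader, print the per-class report,
    and return the confusion matrix."""
    model.eval()
    y_true, y_pred = [], []
    for images, labels in loader:
        logits = model(images.to(device))
        y_pred.extend(logits.argmax(dim=1).cpu().tolist())
        y_true.extend(labels.tolist())
    labels_idx = list(range(len(CLASS_NAMES)))
    print(classification_report(y_true, y_pred, labels=labels_idx,
                                target_names=CLASS_NAMES, zero_division=0))
    return confusion_matrix(y_true, y_pred, labels=labels_idx)
```

The returned matrix can be passed to Seaborn's `heatmap` for the confusion-matrix plots shown in the notebook.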

⚙️ Training Configuration

import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

EPOCHS     = 20
BATCH_SIZE = 32
LR         = 1e-4
DEVICE     = "cuda" if torch.cuda.is_available() else "cpu"
SEED       = 42
torch.manual_seed(SEED)    # for reproducibility

optimizer = Adam(model.parameters(), lr=LR)
scheduler = StepLR(optimizer, step_size=7, gamma=0.1)
criterion = nn.CrossEntropyLoss()
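Under this configuration, a single epoch of the training loop can be sketched as follows (the `train_one_epoch` helper is illustrative; `scheduler.step()` is then called once per epoch):

```python
import torch

def train_one_epoch(model, loader, optimizer, criterion, device):
    """Run one training epoch; return (mean loss, accuracy)."""
    model.train()
    total_loss, correct, seen = 0.0, 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item() * images.size(0)
        correct += (logits.argmax(dim=1) == labels).sum().item()
        seen += images.size(0)
    return total_loss / seen, correct / seen
```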

Why EfficientNet-V2-S Won

| Criterion | EfficientNet-V2-S | ResNet18 | Custom CNN |
|---|---|---|---|
| Test Accuracy | ~98% | ~97% | ~88% |
| Pretrained | ✅ ImageNet | ✅ ImageNet | ❌ Scratch |
| Fused-MBConv | ✅ Yes | ❌ No | ❌ No |
| Progressive Learning | ✅ Yes | ❌ No | ❌ No |
| Inference Speed | ⚡ Fast | ⚡ Fast | ⚡⚡ Fastest |

Decision: EfficientNet-V2-S selected for best accuracy and generalization.


🛠️ Tech Stack

Deep Learning & Computer Vision

Python PyTorch TorchVision

Data & Visualization

NumPy pandas Matplotlib Seaborn scikit-learn

Deployment & Tools

Vercel Jupyter Kaggle GitHub


📦 Installation

✅ Prerequisites

  • Python 3.8+ (Download)
  • pip package manager
  • GPU recommended (CUDA-compatible) — or use Kaggle free T4 GPU
  • Git for version control

Step 1️⃣ – Clone Repository

git clone https://github.com/jaypatel342005/Brain-Tumors-CNN.git
cd Brain-Tumors-CNN

Step 2️⃣ – Setup Python Environment

# Create and activate virtual environment
python -m venv venv
source venv/bin/activate        # macOS / Linux
venv\Scripts\activate           # Windows

# Install all dependencies
pip install -r requirements.txt

Step 3️⃣ – Verify Installation

python -c "import torch; print(f'PyTorch {torch.__version__} | CUDA: {torch.cuda.is_available()}')"

requirements.txt

torch>=2.0.0
torchvision>=0.15.0
numpy>=1.23.0
pandas>=1.5.0
matplotlib>=3.6.0
seaborn>=0.12.0
scikit-learn>=1.1.0
Pillow>=9.0.0
tqdm>=4.64.0

🚀 Getting Started

🌐 Option A: Live Demo (No Setup Required)

👉 neuralscanai.vercel.app — Upload an MRI scan and get an instant prediction with confidence scores.

📓 Option B: Kaggle Notebook (Free GPU)

👉 Open on Kaggle — Fork and run the full pipeline with a free T4 GPU. No local setup needed.

💻 Option C: Run Locally via Jupyter

jupyter notebook
# Open and run notebooks in order:
# 1. data_exploration.ipynb    → EDA & class distribution
# 2. model_training.ipynb      → Train all 3 architectures
# 3. evaluation.ipynb          → Metrics & confusion matrices
# 4. inference.ipynb           → Predict on new MRI scans

🔮 Option D: Local Inference Script

import torch
from PIL import Image
import torchvision.transforms as transforms

CLASS_NAMES = ['Glioma', 'Meningioma', 'No Tumor', 'Pituitary']

def predict(image_path: str, model, device) -> dict:
    """Predict tumor class and confidence from a single MRI image."""
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406],
                             [0.229, 0.224, 0.225])
    ])

    image  = Image.open(image_path).convert("RGB")
    tensor = transform(image).unsqueeze(0).to(device)

    model.eval()
    with torch.no_grad():
        probs    = torch.softmax(model(tensor), dim=1).squeeze()
        pred_idx = probs.argmax().item()

    return {
        "prediction" : CLASS_NAMES[pred_idx],
        "confidence" : f"{probs[pred_idx]*100:.2f}%",
        "all_probs"  : {cls: f"{p*100:.2f}%" for cls, p in zip(CLASS_NAMES, probs)}
    }

# ─── Example usage (`model` must already hold trained weights) ──
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
result = predict("mri_scan.jpg", model, device)

print(f"Prediction : {result['prediction']}")   # → Pituitary
print(f"Confidence : {result['confidence']}")   # → 97.43%
print(f"All Probs  : {result['all_probs']}")

🗂 Project Structure

Brain-Tumors-CNN/
│
├── 📁 backend/                                   # Backend server (API & model inference)
│
├── 🌐 frontend/                                  # Frontend web application (Vercel)
│
├── 📓 BrainTumorCNN(new).ipynb                   # ⭐ Latest updated training notebook
├── 📓 BrainTumorCNN.ipynb                        # Original training & evaluation notebook
├── 📓 Brain_Tumor_CNN_Blueprint.ipynb            # Architecture planning & design notebook
│
├── 📄 EfficientNet-V2-S.pdf                      # EfficientNet-V2-S research paper reference
│
├── 🖼️ Gemini_Generated_Image.png                 # AI-generated project illustration
├── 🖼️ MRI_of_Human_Brain.jpg                     # Sample MRI scan for demo / testing
│
├── 🤖 best_model.pth                             # ⭐ Best trained model weights (EfficientNet-V2-S)
│
├── 🔧 render.yaml                                # Render.com deployment configuration
├── 🔗 .gitignore                                 # Git ignore rules
└── ✨ README.md                                  # This file

📈 Insights & Future Work

✅ What Worked Well

  1. Transfer Learning Dominates — Both pretrained models vastly outperformed the custom CNN, confirming ImageNet features generalize strongly to medical imaging.
  2. EfficientNet-V2-S's Fused-MBConv — The upgraded block design accelerated training convergence while achieving the highest test accuracy.
  3. Data Augmentation — Significantly reduced overfitting for the custom CNN and improved generalization across all models.
  4. Minimal Overfitting — The ~1% gap between train and test accuracy for EfficientNet-V2-S confirms excellent generalization.

🚀 Future Directions

  • 🔍 Grad-CAM — Heatmap visualizations to show which MRI regions the model focuses on
  • 🤗 Vision Transformer (ViT) — Global attention-based classification for comparison
  • 📦 ONNX Export — Lightweight cross-platform deployment of the trained model
  • 🌐 Streamlit / Gradio Demo — Interactive local web app for offline inference
  • 🧪 3D CNN — Volumetric analysis using stacked MRI slices for richer spatial context

🤝 Contributing

Contributions are welcome! Here's how to get started:

1. Fork & Clone

git clone https://github.com/YOUR_USERNAME/Brain-Tumors-CNN.git
cd Brain-Tumors-CNN

2. Create a Feature Branch

git checkout -b feature/grad-cam-visualization

3. Commit Your Changes

git add .
git commit -m "Add: Grad-CAM saliency map visualization"

4. Push & Open a Pull Request

git push origin feature/grad-cam-visualization

Then open a PR on GitHub with a clear description of what changed and why.

Contribution ideas: Grad-CAM · ViT/Swin Transformer baseline · ONNX export · Streamlit demo · 3D CNN support


📜 License

This project is licensed under the MIT License — see the LICENSE file for details.

  • ✅ Use commercially · ✅ Modify & distribute · ✅ Use privately
  • ⚠️ Must include original license & copyright notice

📞 Contact & Support

Get in Touch

Jay Patel — Deep Learning Engineer | Computer Vision Enthusiast

📧 Email: pateljay97378@gmail.com · 💼 GitHub: @jaypatel342005 · 📊 Kaggle: @jaypatel345

Have questions? Open an Issue on GitHub!


⭐ Show Your Support

If this project helped you, please:

  • ⭐ Star this repository on GitHub
  • 🔗 Share with your network
  • 📢 Contribute via PRs or ideas
  • 💬 Give feedback via issues

Made with ❤️ to Advance Healthcare AI · NeuroScan AI © 2024 | All Rights Reserved
