
Duality Segmentation Models

This repository contains four semantic segmentation models: DeepLabV3+ with ResNet50 and EfficientNet backbones, a custom Attention U-Net, and a SegFormer + DeepLabV3+ ensemble.

Project Structure

├── DeeplabV3plus + Resnet50/                        # DeepLabV3+ with ResNet50 backbone
├── DeeplabV3plus+Efficientnet/                      # DeepLabV3+ with EfficientNet backbone
├── Duality AI Seg Mod/                              # Custom Attention U-Net model
└── OMEN_Segformer + DeeplabV3plus + Efficientnet/   # Ensemble model

⚠️ Important: Large Model Files

The trained model weights (.pth and .pt files) are not included in this repository due to their large size (100+ MB each). They are listed in .gitignore to keep the repository lightweight.
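For reference, the ignore rules would look roughly like this (check the repository's actual .gitignore for the exact patterns):

# .gitignore (excerpt) — keep trained weights out of version control
*.pth
*.pt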

Accessing Model Files

Option 1: Download from Cloud Storage (Recommended)

If the model files have been uploaded to cloud storage, download them into the matching model directory:

# Example using Google Drive
gdown <drive-file-id> -O DeeplabV3plus\ +\ Resnet50/best_model.pth

# Or using a shared link
wget https://your-cloud-storage-link/best_model.pth -O DeeplabV3plus\ +\ Resnet50/best_model.pth
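After downloading, it is worth checking that the checkpoint deserializes before wiring it into a script. A minimal sketch (path taken from the example above; assumes the file is a standard PyTorch checkpoint):

# Quick sanity check for a downloaded weight file
import torch

# map_location="cpu" lets this run on machines without a GPU
state = torch.load("DeeplabV3plus + Resnet50/best_model.pth", map_location="cpu")
print(type(state))  # typically a state_dict, or a dict wrapping one (e.g. with "epoch" keys)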

Option 2: Train Models from Scratch

Each model directory includes a training script, so the weights can be reproduced from scratch:

cd "DeeplabV3plus + Resnet50"
python train.py

# Or for the Attention U-Net:
cd "Duality AI Seg Mod/submission_package"
python scripts/train.py

Option 3: Use Git LFS (Git Large File Storage)

For efficient versioning of large files:

# Install Git LFS
brew install git-lfs  # macOS
# sudo apt-get install git-lfs  # Linux

# Initialize LFS for .pth and .pt files
git lfs install
git lfs track "*.pth" "*.pt"
git add .gitattributes
git commit -m "Set up Git LFS for model files"

# Then add your model files
git add DeeplabV3plus\ +\ Resnet50/best_model.pth
git commit -m "Add trained models (LFS)"
git push

Setup Instructions

1. Clone Repository

git clone <your-repo-url>
cd "Duality Seg Mod"

2. Install Dependencies

Each model directory has its own requirements file. Install the dependencies for all models:

# Create virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # macOS/Linux
# venv\Scripts\activate  # Windows

# Install dependencies for all models
pip install -r "DeeplabV3plus + Resnet50/requirements.txt"
pip install -r "DeeplabV3plus+Efficientnet/requirements.txt"
pip install -r "Duality AI Seg Mod/requirements.txt"
pip install -r "OMEN_Segformer + DeeplabV3plus + Efficientnet/requirements.txt"

Or use the combined requirements:

pip install -r requirements.txt  # if available at root

3. Download or Train Models

See "Accessing Model Files" section above.

4. Run Tests/Inference

# DeepLabV3+ models
cd "DeeplabV3plus + Resnet50"
python test.py

# Attention U-Net
cd "Duality AI Seg Mod/submission_package"
python scripts/predict.py

# Ensemble
cd "OMEN_Segformer + DeeplabV3plus + Efficientnet"
python test.py
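For a quick one-off prediction outside the provided scripts, here is a minimal sketch, assuming the DeepLabV3+ model was built with segmentation_models_pytorch and saved as a plain state_dict (if the repo defines its own model class, substitute it; NUM_CLASSES and the input size are placeholders):

import torch
import segmentation_models_pytorch as smp

NUM_CLASSES = 2  # placeholder: set to the class count used in training

# Rebuild the architecture (no pretrained download needed), then load trained weights
model = smp.DeepLabV3Plus(encoder_name="resnet50", encoder_weights=None, classes=NUM_CLASSES)
model.load_state_dict(torch.load("DeeplabV3plus + Resnet50/best_model.pth", map_location="cpu"))
model.eval()

# Dummy input: one 3-channel image; replace with a real, preprocessed tensor
x = torch.randn(1, 3, 512, 512)
with torch.no_grad():
    logits = model(x)            # shape (1, NUM_CLASSES, 512, 512)
    mask = logits.argmax(dim=1)  # per-pixel class indices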

Model Details

| Model           | Backbone               | Location                                       | Best File                                    |
|-----------------|------------------------|------------------------------------------------|----------------------------------------------|
| DeepLabV3+      | ResNet50               | DeeplabV3plus + Resnet50/                      | best_model.pth                               |
| DeepLabV3+      | EfficientNet-B3        | DeeplabV3plus+Efficientnet/                    | model/best.pth                               |
| Attention U-Net | Custom                 | Duality AI Seg Mod/                            | submission_package/runs/*/best_checkpoint.pt |
| Ensemble        | SegFormer + DeepLabV3+ | OMEN_Segformer + DeeplabV3plus + Efficientnet/ | *.pth files                                  |

Performance Metrics

Test results and evaluation metrics are stored in:

  • metrics.csv files inside each model's runs/ directory
  • the output of each model's evaluation script, which prints a more detailed performance breakdown
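To inspect a run's metrics programmatically, a small sketch (the run-directory name and the CSV columns depend on the training script that produced them):

import pandas as pd

# Path is illustrative; point it at an actual run directory
df = pd.read_csv("Duality AI Seg Mod/submission_package/runs/<run-name>/metrics.csv")
print(df.tail())  # metrics from the last few epochs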

Expected Model File Locations

Once downloaded or trained, place the model files as follows:

DeeplabV3plus + Resnet50/
├── best_model.pth
└── last_model.pth

DeeplabV3plus+Efficientnet/model/
└── best.pth

Duality AI Seg Mod/submission_package/runs/full_attention_unet_*/
├── best_checkpoint.pt
└── last_checkpoint.pt

OMEN_Segformer + DeeplabV3plus + Efficientnet/
├── deeplabv3plus.pth
└── segformer_efficientnet_b0.pth
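Because the Attention U-Net checkpoint sits under a per-run directory matched by a wildcard, resolving it with a glob avoids hard-coding the run name. A small sketch (assumes run suffixes sort chronologically, e.g. timestamps):

from glob import glob

# Pick the most recent matching run directory
matches = sorted(glob("Duality AI Seg Mod/submission_package/runs/full_attention_unet_*/best_checkpoint.pt"))
ckpt_path = matches[-1] if matches else None
print(ckpt_path)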

Configuration

Each model has a config.py or config.yaml file. Key parameters:

  • Image size / input resolution
  • Batch size and learning rate
  • Number of epochs and checkpointing frequency
  • Data paths and preprocessing

Modify these before training or inference as needed.
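As an illustration only (not the repository's actual config), a config.py covering those parameters might look like:

# Hypothetical config.py — the real files in each model directory will differ
IMAGE_SIZE = 512          # input resolution (height = width)
BATCH_SIZE = 8            # lower this if you hit CUDA out-of-memory errors
LEARNING_RATE = 1e-4
NUM_EPOCHS = 50
CHECKPOINT_EVERY = 5      # save a checkpoint every N epochs
DATA_ROOT = "data/"       # root directory for images and masks
NORMALIZE_MEAN = (0.485, 0.456, 0.406)  # ImageNet statistics, a common default
NORMALIZE_STD = (0.229, 0.224, 0.225)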

Troubleshooting

"Model file not found" → Download or train the model (see above)

"ImportError: No module named..." → Install requirements: pip install -r requirements.txt

"CUDA out of memory" → Reduce batch size in config or use CPU (set in train/test scripts)

"Data loading errors" → Verify data paths in config.py or data_loader.py

Contributing

To add improvements:

  1. Create a branch: git checkout -b feature/your-feature
  2. Make changes (model files won't be committed due to .gitignore)
  3. Push and create a pull request

License

Specify your project license here.

Contact

Questions or issues? Create an issue or contact the maintainers.