This repository contains four semantic segmentation models: DeepLabV3+ with ResNet50 and EfficientNet backbones, a custom Attention U-Net, and a Segformer+DeepLabV3+ ensemble.
```
├── DeeplabV3plus + Resnet50/                       # DeepLabV3+ with ResNet50 backbone
├── DeeplabV3plus+Efficientnet/                     # DeepLabV3+ with EfficientNet backbone
├── Duality AI Seg Mod/                             # Custom Attention U-Net model
└── OMEN_Segformer + DeeplabV3plus + Efficientnet/  # Ensemble model
```
## Accessing Model Files

The trained model weights (`.pth` and `.pt` files) are not included in this repository due to their large size (100+ MB each). They are listed in `.gitignore` to keep the repository lightweight.
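The corresponding `.gitignore` entries might look like this (a sketch; the repository's actual file may list more patterns):

```
# Exclude large trained weights from version control
*.pth
*.pt
```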
If model files were uploaded to cloud storage during development, download them into place:

```bash
# Example using Google Drive
gdown <drive-file-id> -O "DeeplabV3plus + Resnet50/best_model.pth"

# Or using a shared link
wget https://your-cloud-storage-link/best_model.pth -O "DeeplabV3plus + Resnet50/best_model.pth"
```

Each model directory has a `train.py` script to retrain models:

```bash
cd "DeeplabV3plus + Resnet50"
python train.py

# Or for the Attention U-Net:
cd "Duality AI Seg Mod/submission_package"
python scripts/train.py
```

For efficient versioning of large files:
```bash
# Install Git LFS
brew install git-lfs              # macOS
# sudo apt-get install git-lfs    # Linux

# Initialize LFS for .pth and .pt files
git lfs install
git lfs track "*.pth" "*.pt"
git add .gitattributes
git commit -m "Set up Git LFS for model files"

# Then add your model files
git add "DeeplabV3plus + Resnet50/best_model.pth"
git commit -m "Add trained models (LFS)"
git push
```

Clone the repository:

```bash
git clone <your-repo-url>
cd "Duality Seg Mod"
```

Each model directory has its own requirements. Install all dependencies:
```bash
# Create virtual environment (recommended)
python -m venv venv
source venv/bin/activate    # macOS/Linux
# venv\Scripts\activate     # Windows

# Install dependencies for all models
pip install -r "DeeplabV3plus + Resnet50/requirements.txt"
pip install -r "DeeplabV3plus+Efficientnet/requirements.txt"
pip install -r "Duality AI Seg Mod/requirements.txt"
pip install -r "OMEN_Segformer + DeeplabV3plus + Efficientnet/requirements.txt"
```

Or use the combined requirements:

```bash
pip install -r requirements.txt   # if available at root
```

See the "Accessing Model Files" section above.
```bash
# DeepLabV3+ models
cd "DeeplabV3plus + Resnet50"
python test.py

# Attention U-Net
cd "Duality AI Seg Mod/submission_package"
python scripts/predict.py

# Ensemble
cd "OMEN_Segformer + DeeplabV3plus + Efficientnet"
python test.py
```

| Model | Backbone | Location | Best File |
|---|---|---|---|
| DeepLabV3+ | ResNet50 | `DeeplabV3plus + Resnet50/` | `best_model.pth` |
| DeepLabV3+ | EfficientNet-B3 | `DeeplabV3plus+Efficientnet/` | `model/best.pth` |
| Attention U-Net | Custom | `Duality AI Seg Mod/` | `submission_package/runs/*/best_checkpoint.pt` |
| Ensemble | Segformer + DeepLabV3+ | `OMEN_Segformer + DeeplabV3plus + Efficientnet/` | `*.pth` files |
Test results and evaluation metrics are stored in:
- `runs/` directories with `metrics.csv` files
- Individual model evaluation scripts provide detailed performance analysis
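To compare runs, the `metrics.csv` files can be parsed with the standard library. A minimal sketch — the column names (`epoch`, `loss`, `miou`) are assumptions; check the actual header of your run:

```python
import csv
import io

# Stand-in for the contents of runs/<run>/metrics.csv
# (the real column names may differ per model).
sample = "epoch,loss,miou\n1,0.52,0.61\n2,0.41,0.68\n"

rows = list(csv.DictReader(io.StringIO(sample)))
best = max(rows, key=lambda r: float(r["miou"]))
print(f"best epoch: {best['epoch']} (mIoU {best['miou']})")  # → best epoch: 2 (mIoU 0.68)
```

Point `io.StringIO(sample)` at an `open("runs/<run>/metrics.csv")` handle to run this on real output.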
Once downloaded/trained, place model files in:
```
DeeplabV3plus + Resnet50/
├── best_model.pth
└── last_model.pth

DeeplabV3plus+Efficientnet/model/
└── best.pth

Duality AI Seg Mod/submission_package/runs/full_attention_unet_*/
├── best_checkpoint.pt
└── last_checkpoint.pt

OMEN_Segformer + DeeplabV3plus + Efficientnet/
├── deeplabv3plus.pth
└── segformer_efficientnet_b0.pth
```
Each model has a `config.py` or `config.yaml` file. Key parameters:
- Image size / input resolution
- Batch size and learning rate
- Number of epochs and checkpointing frequency
- Data paths and preprocessing
Modify these before training or inference as needed.
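As a sketch of the parameters listed above (all names and values here are hypothetical; each model's actual `config.py` or `config.yaml` uses its own):

```python
# Hypothetical config.py -- the real files differ per model.
IMAGE_SIZE = (512, 512)       # input resolution (height, width)
BATCH_SIZE = 8                # reduce if you hit CUDA out-of-memory errors
LEARNING_RATE = 1e-4
NUM_EPOCHS = 50
CHECKPOINT_EVERY = 5          # save a checkpoint every N epochs
DATA_ROOT = "data/"           # dataset location for the loaders
```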
- "Model file not found" → download or train the model (see above)
- "ImportError: No module named ..." → install requirements: `pip install -r requirements.txt`
- "CUDA out of memory" → reduce the batch size in the config or run on CPU (set in the train/test scripts)
- Data loading errors → verify data paths in `config.py` or `data_loader.py`
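The CPU fallback mentioned above usually comes down to a device check when loading weights. A minimal sketch, assuming PyTorch (the repository's `.pth`/`.pt` files suggest it) and a hypothetical `best_model.pth` path:

```python
import torch

# Pick CUDA when available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# map_location remaps GPU-saved tensors onto the chosen device,
# so weights trained on GPU still load on a CPU-only machine:
# state_dict = torch.load("best_model.pth", map_location=device)
print(f"running on: {device}")
```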
To add improvements:
- Create a branch: `git checkout -b feature/your-feature`
- Make changes (model files won't be committed due to `.gitignore`)
- Push and create a pull request
Specify your project license here.
Questions or issues? Create an issue or contact the maintainers.