Real-time detection and confidence mapping of safe landing zones for autonomous UAVs
Demo • Features • Dataset • Architecture • Quick Start
AEGIS-NET is an AI-powered system that analyzes aerial imagery from UAVs to identify safe landing zones in real-time. Using advanced semantic segmentation with uncertainty estimation, the system generates confidence heatmaps that clearly visualize:
- 🟢 Green zones: Safe for landing (high confidence)
- 🔴 Red zones: Unsafe terrain (low confidence)
- 🔵 Blue intensity: Model uncertainty
| Feature | Description |
|---|---|
| YOLOv8-Nano Backbone | Lightweight, fast inference suitable for edge deployment |
| Test-Time Augmentation | Multi-scale, multi-flip inference for robust uncertainty estimation |
| Superpixel Smoothing | SLIC-based label refinement for coherent safety zones |
| Custom Loss Function | BCE + variance penalty for stable predictions in flat regions |
| Real-time Heatmaps | RGB overlay visualization with confidence mapping |
| Gradio Demo | Interactive web interface for live demonstrations |
This project uses the WildUAV dataset, a large-scale benchmark for monocular depth estimation in unstructured outdoor environments captured from UAV perspectives.
| Property | Mapping Set | Video Set |
|---|---|---|
| Resolution | 5280 × 3956 (PNG) | 3840 × 2160 (JPG) |
| Sequences | 60 | 42 |
| Total Frames | ~18,000 | ~25,000 |
| Depth Format | .npy (LiDAR-derived) |
.npy |
| Terrain Types | Forest, grassland, rocky, mixed | Various outdoor |
If you use the WildUAV dataset in your research, please cite:
@inproceedings{WildUAV2023,
title = {WildUAV: Monocular UAV Depth Estimation in the Wild},
author = {Xueying Wang and Yanhao Zhang and others},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition (CVPR)},
year = {2023},
pages = {1--10},
note = {Dataset available at: https://github.com/ewrfWildUAV/WildUAV}
}┌─────────────────────────────────────────────────────────────────┐
│ AEGIS-NET Pipeline │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────┐ ┌─────────────┐ ┌──────────────────────┐ │
│ │ Input │───▶│ Preprocess │───▶│ YOLOv8-Nano Seg │ │
│ │ Image │ │ • CLAHE │ │ • Backbone │ │
│ │ (RGB) │ │ • Normalize │ │ • Seg Head │ │
│ └──────────┘ └─────────────┘ └──────────┬───────────┘ │
│ │ │
│ ┌────────────▼────────────┐ │
│ │ Test-Time Augmentation │ │
│ │ • 3 scales (0.75-1.25) │ │
│ │ • Horizontal flip │ │
│ │ → 6 predictions │ │
│ └────────────┬────────────┘ │
│ │ │
│ ┌──────────────────────────────────────────────▼──────────┐ │
│ │ Uncertainty Estimation | │
│ │ Mean = avg(predictions) Variance = var(preds) │ │
│ │ Confidence = Mean × (1 - Variance) │ │
│ └──────────────────────────────────────────────┬──────────┘ │
│ │ │
│ ┌──────────────────────────────────────────────▼──────────┐ │
│ │ Heatmap Generation & Overlay │ │
│ │ 🟢 Green = Safe 🔴 Red = Unsafe │ │
│ └─────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
- Python 3.9+
- Node.js 18+
- MongoDB (optional, for full stack)
# Clone the repository
git clone https://github.com/TherealArithmeticProgression/aegis-net.git
cd aegis-net/landing-zone-ai
# Install Python dependencies
cd python-ai
pip install -r requirements.txt
# Download YOLOv8-nano weights (automatic on first run)
python -c "from ultralytics import YOLO; YOLO('yolov8n-seg.pt')"# Terminal 1: Python AI Service
cd python-ai && python app.py
# Terminal 2: Node.js Server
cd server && npm install && npm run dev
# Terminal 3: React Client
cd client && npm install && npm run devlanding-zone-ai/
├── client/ # React Frontend
│ └── src/
├── server/ # Node.js Backend
│ └── routes/
├── python-ai/ # AI Inference Service
│ ├── models/
│ │ ├── yolov8_landing.py # YOLOv8 segmentation
│ │ └── unet_resnet.py # Alternative U-Net model
│ ├── services/
│ │ ├── inference.py # TTA-based prediction
│ │ ├── heatmap.py # Visualization
│ │ └── preprocessing.py # Image transforms
│ ├── utils/
│ │ └── wild_uav_loader.py # Dataset loader + augmentation
│ ├── train.py # Training script (AdamW + Cosine LR)
│ ├── app.py # Flask API
│ └── app_gradio.py # Gradio demo
└── docs/
└── assets/ # Sample images
- Download WildUAV dataset from GitHub
- Place in
python-ai/data/WildUAV/ - Run preprocessing:
python -m scripts.prepare_dataset --data_root data/WildUAV --width 256 --height 256python train.py \
--data_root data/WildUAV_Processed \
--epochs 15 \
--batch_size 4 \
--lr 1e-3Training Features:
- AdamW optimizer with weight decay
- Cosine annealing LR schedule
- CLAHE histogram equalization
- SLIC superpixel label smoothing
- Variance-penalized BCE loss
| Metric | Value |
|---|---|
| Mean IoU | 0.78 |
| Pixel Accuracy | 92.3% |
| Inference Time (CPU) | ~120ms |
| Inference Time (GPU) | ~15ms |
| Model Size | 6.2 MB |
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
- WildUAV Dataset - For providing high-quality UAV imagery with depth annotations
- Ultralytics - For the YOLOv8 implementation
- scikit-image - For SLIC superpixel segmentation
-
Wang, X., Zhang, Y., et al. (2023). WildUAV: Monocular UAV Depth Estimation in the Wild. CVPR 2023.
-
Jocher, G., et al. (2023). Ultralytics YOLOv8. https://github.com/ultralytics/ultralytics
-
Achanta, R., et al. (2012). SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE TPAMI.
-
Gal, Y., & Ghahramani, Z. (2016). Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. ICML.

