# On-Device Ship Identification from Underwater Noise
This repository accompanies the paper:
> **ShipNN: On-Device Ship Identification from Underwater Noise**
> Kainat Altaf, Momin Ali, Laura Harms, Christian Renner, Olaf Landsiedel
> *Proceedings of the ACM Symposium on Applied Computing (SAC), 2026*
The code is provided to support reproducibility, transparency, and further research in embedded and edge-based underwater acoustic monitoring.
ShipNN implements the complete experimental pipeline used in our paper for ship identification from underwater audio signals in resource-constrained settings.
The repository includes:
- Audio dataset preprocessing and metadata generation (sketched after this list)
- Time–frequency feature extraction using spectrogram representations
- Deep learning models adapted for underwater acoustic data
- Training and evaluation pipelines aligned with the experimental setup in the paper
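As a rough illustration of the metadata step, the sketch below indexes recordings into a CSV. The directory layout, column names, and label scheme are assumptions for illustration, not necessarily the schema produced by `dataset/create_metadata.py`.

```python
# Illustrative sketch of metadata generation; the assumed layout is
# <audio_root>/<ship_class>/<clip>.wav, which may differ from the
# repository's actual conventions.
import csv
from pathlib import Path

def create_metadata(audio_root: str, out_csv: str) -> None:
    """Index .wav recordings, using the parent directory name as the label."""
    rows = [
        {"path": str(wav), "label": wav.parent.name}
        for wav in sorted(Path(audio_root).rglob("*.wav"))
    ]
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["path", "label"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    create_metadata("data/audio", "data/metadata.csv")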
The focus is on lightweight CNN architectures suitable for on-device inference, enabling deployment on embedded platforms such as microcontrollers and low-power edge devices.
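To make "lightweight" concrete, the following is a minimal sketch of a depthwise-separable convolution block, the building pattern behind MobileNet-style models. The channel sizes and layer choices are illustrative, not the paper's exact configuration.

```python
# Minimal depthwise-separable convolution sketch (illustrative only).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A 3x3 depthwise + 1x1 pointwise pair costs roughly (9 + out_ch) / (9 * out_ch)
# of the multiply-accumulates of a standard 3x3 convolution.
x = torch.randn(1, 32, 64, 64)           # (batch, channels, freq, time)
y = DepthwiseSeparableConv(32, 64)(x)    # -> torch.Size([1, 64, 64, 64])
```

This factorization is what keeps the parameter count and compute budget small enough for microcontroller-class targets.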
This repository is intended as a research codebase to:
- Reproduce the results reported in the paper
- Support ablation studies and architectural comparisons
- Serve as a reference implementation for embedded ship classification
It is not designed as a general-purpose or production-ready framework.
The repository is organized as follows:

```
.
├── config/
│   └── mobilenet.yaml           # Experiment configuration used in the paper
│
├── dataset/
│   ├── create_metadata.py       # Audio metadata generation
│   ├── data_loader.py           # Dataset and DataLoader definitions
│   └── generate_spectrogram.py  # Spectrogram extraction utilities
│
├── mobilenet.py                 # MobileNet-based model architecture
├── resnet.py                    # ResNet-based baseline architecture
├── models.py                    # Shared model components
├── run.py                       # Main entry point for training and evaluation
└── ReadMe.md                    # Documentation
```
The workflow follows the experimental methodology described in the paper:

1. Metadata Generation: raw underwater audio recordings are indexed and labeled using the scripts in `dataset/`.
2. Feature Extraction: audio signals are transformed into time–frequency representations suitable for CNN-based learning (see the sketch after this list).
3. Model Configuration: model architecture, preprocessing parameters, and training settings are defined via YAML configuration files.
4. Training and Evaluation: experiments are executed through a unified pipeline using `run.py`.
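As a hedged sketch of the feature-extraction step (the repository's own implementation lives in `dataset/generate_spectrogram.py`), the snippet below computes a log-mel spectrogram with torchaudio. The sample rate, FFT size, hop length, and mel-bin count are placeholders, not the settings from `config/mobilenet.yaml`.

```python
# Log-mel spectrogram extraction sketch; all parameters are placeholders.
import torch
import torchaudio

def wav_to_logmel(path: str, target_sr: int = 16000) -> torch.Tensor:
    waveform, sr = torchaudio.load(path)
    if sr != target_sr:
        waveform = torchaudio.functional.resample(waveform, sr, target_sr)
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=target_sr, n_fft=1024, hop_length=512, n_mels=64
    )(waveform)
    return torch.log(mel + 1e-6)  # log compression stabilizes the dynamic range
```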
To run an experiment using the configuration reported in the paper:
```bash
python run.py --config config/mobilenet.yaml
```

All key hyperparameters and preprocessing settings are defined in the configuration file.
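For readers scripting around the pipeline, a minimal way to consume such a YAML configuration is sketched below. The key names are hypothetical, so consult `config/mobilenet.yaml` for the actual schema.

```python
# Minimal sketch of loading an experiment configuration with PyYAML.
# The keys accessed below are hypothetical examples, not the actual
# schema of config/mobilenet.yaml.
import yaml

with open("config/mobilenet.yaml") as f:
    cfg = yaml.safe_load(f)

# Illustrative accesses with defaults, in case a key is absent.
learning_rate = cfg.get("learning_rate", 1e-3)
batch_size = cfg.get("batch_size", 32)
num_classes = cfg.get("num_classes", 4)
```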
- Results may vary slightly depending on hardware, random seeds, and software versions; pinning seeds (see the sketch below) reduces but does not eliminate this variance.
- The repository focuses on the training and evaluation pipeline; embedded deployment and latency measurements are discussed in the paper.
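One common way to narrow the seed-related variance mentioned above is to pin all random number generators up front. This is a generic PyTorch-style sketch, not a utility shipped with the repository.

```python
# Generic seed-pinning sketch (not part of this repository); exact
# reproducibility still depends on hardware and library versions.
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # seeds PyTorch's CPU RNG
    torch.cuda.manual_seed_all(seed)  # explicit seed for all CUDA devices
```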
If you use this code in your research, please cite the following paper:
```bibtex
@inproceedings{altaf2026shipnn,
  title     = {ShipNN: On-Device Ship Identification from Underwater Noise},
  author    = {Altaf, Kainat and Ali, Momin and Harms, Laura and Renner, Christian and Landsiedel, Olaf},
  booktitle = {Proceedings of the ACM Symposium on Applied Computing (SAC)},
  year      = {2026}
}
```

This code is released for academic and research use.
Please refer to the repository for license details.
