
The Smart Environmental Noise System (SENS) is an advanced sensor technology designed for real-time acoustic monitoring, with a focus on urban environments. Built on a Raspberry Pi platform, SENS captures sound continuously and processes it locally using custom software based on small, efficient artificial intelligence algorithms. SENS calculates acoustic parameters, including Sound Pressure Level (SPL), predicts the perceptual sound attributes of pleasantness and eventfulness (ISO 12913), and detects the presence of specific sound sources such as vehicles, birds, and human activity, among others. To safeguard privacy, all processing occurs directly on the device in real time, ensuring that no audio recordings are permanently stored or transferred. Only the extracted audio representation is transmitted over the wireless network to a remote server, using mobile data connectivity. SENS technology represents an innovative step in environmental noise monitoring, offering real-time processing and robust privacy protection. A single SENS device, or a network of them, can serve as a powerful tool for understanding the acoustic characteristics of soundscapes with efficiency and flexibility.
This work was supported by the project "Soundlights: Distributed Open Sensors Network and Citizen Science for the Collective Management of the City's Sound Environments" (9382417), a collaboration between the Music Technology Group (Universitat Pompeu Fabra) and Bitlab Cooperativa Cultural.
It is funded by BIT Habitat (Ajuntament de Barcelona) under the program La Ciutat Proactiva; and by the IA y Música: Cátedra en Inteligencia Artificial y Música (TSI-100929-2023-1) by the Secretaría de Estado de Digitalización e Inteligencia Artificial and NextGenerationEU under the program Cátedras ENIA 2022.
- Amaia Sagasti, Frederic Font, Xavier Serra: SENS (Smart Environmental Noise System) - Urban Sound Symposium 2025 Poster Link Zenodo
- Amaia Sagasti, Martín Rocamora, Frederic Font: Prediction of Pleasantness and Eventfulness Perceptual Sound Qualities in Urban Soundscapes - DCASE Workshop 2024 Paper link DCASE webpage
- Amaia Sagasti Martínez - MASTER THESIS: Prediction of Pleasantness and Eventfulness Perceptual Sound Qualities in Urban Soundscapes - Sound and Music Computing Master (Music Technology Group, Universitat Pompeu Fabra - Barcelona) Master Thesis Report link Zenodo
SENS combines hardware and software into an intelligent acoustic sensor for monitoring urban spaces. Nevertheless, the software can work on its own, allowing SENS technology to be used on any device with a microphone (like a laptop). Additionally, the SENS algorithm can be simulated on pre-recorded audio. See the section that best suits your needs:
You can find the trained models in data/models.
SENS is implemented on a Raspberry Pi Model B with 4 GB of RAM running a 64-bit operating system. The device has a microphone connected, as well as a mobile network HAT with a SIM card. Additionally, three LED pins are connected and configured to signal the correct performance of the sensor. The assembled components are placed inside an IP67 plastic case.
To run the software, first follow the instructions in Environment set up to prepare your working environment. Download the required models from Zenodo and place them in sens-sensor/data/models/. Then, it is advised to check Code Structure to understand how the code works. Do not forget to calibrate the microphone; see Microphone calibration. To run, simply do:
# Open three terminals and activate the environment in all of them
cd sens-sensor
# Terminal 1
python main_capture.py [microphone input, integer]
# Terminal 2
python main_process.py
# Terminal 3
python main_send.py
Some notes:
- Suggestion for microphone: link
- Suggestion for communication hat: link
- Suggestion for LED pins: link
- Suggestion for IP65 plastic case (check sizes): link
- Internet data consumption: in our case, we were making predictions every 3 seconds (20 predictions per minute) but sending one message per batch of 10 predictions (so roughly one send every 30 seconds). Each message is about 20 KB, which leads to 24 hours x 60 minutes x 2 messages x 20 KB ≈ 58 MB per day, or about 1.7 GB per month (see the sketch after this list). We use SIM cards with 4 GB of internet data.
- You will need to modify the "sending" part of the code structure to send the real-time predictions to your own server.
- You can modify the AI models that you want to use from parameters.py
- If you want the sensor to start running as soon as it boots, it is advised to prepare three service files, one for each part (capturing, processing and sending).
(Product links may be outdated)
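For reference, the data-consumption estimate above can be reproduced with a few lines of Python. The interval, batch size and message size below are the values from our deployment; adjust them to your own configuration:
# Back-of-the-envelope estimate of mobile data consumption
prediction_interval_s = 3   # one prediction every 3 seconds
batch_size = 10             # predictions bundled into one message
message_size_kb = 20        # approximate size of one message

messages_per_day = 24 * 60 * 60 / (prediction_interval_s * batch_size)
mb_per_day = messages_per_day * message_size_kb / 1000
gb_per_month = mb_per_day * 30 / 1000
print(f"{messages_per_day:.0f} messages/day, {mb_per_day:.0f} MB/day, {gb_per_month:.1f} GB/month")
# -> 2880 messages/day, 58 MB/day, 1.7 GB/month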
You can simply run the SENS code in any device that has a microphone input, like a laptop.
For this, you should first follow the instructions in Environment set up to prepare your working environment. Download the required models from Zenodo and place them in sens-sensor/data/models/. Then, it is advised to check Code Structure to understand how the code works. Do not forget to calibrate the microphone; see Microphone calibration. To run, simply do:
# Open three terminals and activate the environment in all of them
cd sens-sensor
# Terminal 1
python main_capture.py [microphone input, integer]
# Terminal 2
python main_process.py
# Terminal 3
python main_send.py
Some notes:
- You will need to modify the "sending" part of the code structure to send the real-time predictions to your own server.
- You can modify the AI models that you want to use from parameters.py
- If you want the sensor to start running as soon as it boots, it is advised to prepare three service files, one for each part (capturing, processing and sending); see the example below.
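As an illustration, a minimal systemd unit for the processing part could look like the following. The paths and user here are assumptions to adapt to your setup, and analogous units would be created for the capture and send scripts:
# Hypothetical example: create /etc/systemd/system/sens-process.service
sudo tee /etc/systemd/system/sens-process.service > /dev/null <<'EOF'
[Unit]
Description=SENS processing service
After=network-online.target

[Service]
User=pi
WorkingDirectory=/home/pi/sens-sensor
ExecStart=/home/pi/my_env/bin/python main_process.py
Restart=always

[Install]
WantedBy=multi-user.target
EOF
# Enable the service so it starts on boot
sudo systemctl enable --now sens-process.service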
We have prepared a small script that simulates SENS functioning. It processes a pre-recorded input audio file as if the audio were being captured by the microphone. The script returns the predictions that the SENS algorithms generate, as well as graphs that display the results in a much more understandable way.
It is advised to use this script on short audios (< 1 minute); longer audios produce less readable output graphs.
For this, you should first follow the instructions in Environment set up to prepare your working environment. Download the required models from Zenodo and place them in sens-sensor/data/models/. Then, simply run in a terminal:
cd sens-sensor
python simulate_SENS.py
Note: you may get errors with the RPi.GPIO library, as the code expects LED pins connected to the Raspberry Pi. Simply delete these code lines.
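Alternatively, instead of deleting them, you can guard the import so that the same code runs both on and off the Raspberry Pi. A minimal sketch (the exact GPIO calls in the scripts may differ):
# Sketch: make RPi.GPIO optional so the code also runs on machines without GPIO
try:
    import RPi.GPIO as GPIO  # only available on the Raspberry Pi
except (ImportError, RuntimeError):
    GPIO = None  # e.g. running on a laptop: skip all LED signalling

def set_led(pin, state):
    """Drive an LED if GPIO is available; otherwise do nothing."""
    if GPIO is not None:
        GPIO.output(pin, GPIO.HIGH if state else GPIO.LOW)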
This section provides all the necessary information to set up the working environment.
NOTE: This project is only compatible with the 64-bit Raspberry Pi architecture. Check your architecture by opening a terminal and running:
uname -m
If the output is aarch64, you have a 64-bit ARM architecture --> COMPATIBLE
The following list details the setup process:
Install Python 3.10.14 (download web). Follow the instructions below (or link):
sudo apt-get update
sudo apt-get install build-essential tk-dev libncurses5-dev libncursesw5-dev libreadline6-dev libdb5.3-dev libgdbm-dev libsqlite3-dev libssl-dev libbz2-dev libexpat1-dev liblzma-dev zlib1g-dev libffi-dev portaudio19-dev
wget https://www.python.org/ftp/python/3.10.14/Python-3.10.14.tar.xz # or download directly from link above
# Navigate to directory where download is
tar -xvf Python-3.10.14.tar.xz
cd Python-3.10.14
./configure --enable-optimizations # be patient
make -j 4 # be waaay more patient here
sudo make altinstall
python3.10 --version # to verify installation
It is recommended to create a virtual environment. Example with venv:
# Go to home directory
/usr/local/bin/python3.10 -m venv my_env
# to activate
source my_env/bin/activate
# to deactivate
deactivate
This code uses LAION-AI's CLAP model. Install CLAP with:
git clone https://github.com/LAION-AI/CLAP.git
Finally, install the sens-sensor specific requirements. To do so, navigate to the SENS project folder and run:
cd sens-sensor
pip install -r requirements.txt
Now you are ready to start using the sens-sensor repository.
SENS measures Leq and LAeq in real time, so the microphone needs to be calibrated. You can use any calibration procedure you want, as long as parameters.mic_calib_path points to the txt file containing the calibration factor by which the captured waveform must be multiplied to obtain the V_rms signal (from which dB values are calculated).
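As an illustration of what this factor does, here is a simplified sketch of how a captured chunk could be converted to a dB value; the function and variable names are ours, not the actual SENS code:
import numpy as np

def waveform_to_db(waveform, calib_factor, ref=20e-6):
    """Apply the calibration factor and compute a dB level for one chunk.

    calib_factor is the value read from the txt file pointed to by
    parameters.mic_calib_path; ref is the standard 20 uPa reference.
    """
    calibrated = waveform * calib_factor  # scale to the physical signal
    rms = np.sqrt(np.mean(calibrated ** 2))
    return 20 * np.log10(rms / ref)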
If you own a microphone calibrator that plays a 1 kHz tone at the standard 94 dB (like this example), simply activate the environment and run the following command in the terminal to calibrate:
python calibration.py [microphone input, integer]
The following three images illustrate the three main processes that make SENS work. Each process is executed by a different Python script, run in a separate terminal window.
NOTE: All inputs shown in orange are defined in parameters.py. This file contains all the configuration parameters, variables and paths to the necessary files.
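For orientation, here is a hypothetical excerpt of the kind of values such a file holds. Apart from mic_calib_path, which is mentioned in Microphone calibration above, all names and values below are illustrative:
# parameters.py (illustrative excerpt, not the actual file)
mic_calib_path = "data/calibration/mic_calib.txt"  # calibration factor file
models_dir = "data/models/"    # trained models downloaded from Zenodo
prediction_interval_s = 3      # seconds of audio per prediction
batch_size = 10                # predictions grouped into one message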
Together with the three main scripts, we provide extra code for other processes involved in the SENS project context but not required to run on the actual sensor.
- simulate_SENS.py: This script simulates the SENS sensor on a pre-recorded WAV audio. The input audio is analysed in chunks of audio data. The resulting outputs are saved in JSON files containing the predictions, Leq and LAeq of each audio chunk (see the sketch after this list).
- main_send_lib.py: The SENS sensor was used in an experiment carried out in a library within the framework of a project called SOUNDLIGHTS, which researches how real-time message signs can be used to improve acoustic quality. The sensor was connected over the wireless network to a LED screen (driven by another Raspberry Pi). In this scenario, SENS calculated the SPL and predicted the presence of human activity, and this script sent that information, together with a threshold value, to the LED screen. The LED screen interpreted the message and displayed different texts, with the goal of influencing people's behaviour inside the library. Additionally, through main_send_lib.py the sensor also sends the resulting predictions to the remote server.
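As an example of consuming the simulation output, here is a small sketch that loads one of the resulting JSON files. The file name and key names are assumptions; check the files produced on your machine for the actual schema:
import json

# Load a JSON file produced by simulate_SENS.py (path is illustrative)
with open("output/chunk_predictions.json") as f:
    chunks = json.load(f)

# Print the (hypothetical) per-chunk values
for chunk in chunks:
    print(chunk.get("LAeq"), chunk.get("pleasantness"), chunk.get("eventfulness"))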
See LICENSE for more information.