This work was supported by the project ''Soundlights: Distributed Open Sensors Network and Citizen Science for the Collective Management of the City's Sound Environments'' (9382417), a collaboration between the Music Technology Group (Universitat Pompeu Fabra) and Bitlab Cooperativa Cultural.
It is funded by BIT Habitat (Ajuntament de Barcelona) under the program La Ciutat Proactiva; and by the IA y Música: Cátedra en Inteligencia Artificial y Música (TSI-100929-2023-1) by the Secretaría de Estado de Digitalización e Inteligencia Artificial and NextGenerationEU under the program Cátedras ENIA 2022.
- Amaia Sagasti, Martín Rocamora, Frederic Font: Prediction of Pleasantness and Eventfulness Perceptual Sound Qualities in Urban Soundscapes - DCASE Workshop 2024 Paper link DCASE webpage
- Amaia Sagasti Martínez - MASTER THESIS: Prediction of Pleasantness and Eventfulness Perceptual Sound Qualities in Urban Soundscapes - Sound and Music Computing Master (Music Technology Group, Universitat Pompeu Fabra - Barcelona) Master Thesis Report link Zenodo
The code runs on a Raspberry Pi model B. The device has a microphone connected, as well as a mobile network module with a SIM card.
This section provides all the necessary information to set up the working environment.
NOTE: This project is only compatible with the 64-bit Raspberry Pi architecture. Check your architecture by opening a terminal and running:
uname -m
If the output is aarch64, you have a 64-bit ARM architecture --> COMPATIBLE
The following list details the setup process:
Python 3.10.14 download web.
Follow the instructions below (or the link):
sudo apt-get update
sudo apt-get install build-essential tk-dev libncurses5-dev libncursesw5-dev libreadline6-dev libdb5.3-dev libgdbm-dev libsqlite3-dev libssl-dev libbz2-dev libexpat1-dev liblzma-dev zlib1g-dev libffi-dev
wget https://www.python.org/ftp/python/3.10.14/Python-3.10.14.tar.xz # or download directly from link above
# Navigate to directory where download is
tar -xvf Python-3.10.14.tar.xz
cd Python-3.10.14
./configure --enable-optimizations # be patient
make -j 4 # be waaay more patient here
sudo make altinstall
python3.10 --version # to verify installation
It is recommended to create a virtual environment. Example with venv:
# Go to home directory
/usr/local/bin/python3.10 -m venv my_env
# to activate
source my_env/bin/activate
# to deactivate
deactivate
This code uses LAION-AI's CLAP model. Install CLAP with:
git clone https://github.com/LAION-AI/CLAP.git
Finally, install sens-sensor specific requirements. For that, navigate your terminal to the SENS project folder and run:
cd sens-sensor
pip install -r requirements.txt
Now you are ready to start using the sens-sensor repository.
This code runs in combination with path/to/KeAcoustics/main.py so that the sensor captures sound, stores segments of audio files, deletes the oldest segments, and processes joined segments to make predictions of pleasantness, eventfulness and sound sources.
- path/to/KeAcoustics/main.py is in charge of recording the audio and handling the saving of audio segments. The two main parameters to consider from path/to/KeAcoustics/parameters.py are:
  - spl_time --> time used for the SPL average, in seconds (the length of each segment)
  - maintain_time --> time, in seconds, during which stored segments are preserved
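As a sanity check on how these two parameters interact, the recorder keeps roughly maintain_time / spl_time segment files on disk at any moment. A minimal sketch with assumed example values (not the repository's defaults):

```python
# Assumed example values, NOT the actual defaults from parameters.py:
spl_time = 1        # seconds of audio per stored segment
maintain_time = 10  # seconds of audio preserved before old segments are deleted

# Maximum number of segment files kept on disk at any moment:
max_segments = maintain_time // spl_time
print(max_segments)  # 10
```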
- In this code, path/to/calculation.py is in charge of reading the stored audio and making predictions. First, it loads the model in the initialization function. Then it joins the saved audio segments; the number of segments to join is given by n_segments. Finally, the joined audio is used for the predictions, and these values are saved in a txt file.
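The join-and-save flow described above can be sketched as follows. The helper names, the segments folder, and the output format are illustrative assumptions, not the repository's actual API:

```python
import glob
import os

# Hedged sketch of the calculation.py flow: pick the latest segments,
# then append the resulting predictions to a txt file.

def join_latest_segments(folder, n_segments):
    """Return the paths of the n most recent audio segments, oldest first."""
    paths = sorted(glob.glob(os.path.join(folder, "*.wav")),
                   key=os.path.getmtime)
    return paths[-n_segments:]

def save_predictions(txt_path, pleasantness, eventfulness):
    """Append one line of prediction values to the output txt file."""
    with open(txt_path, "a") as f:
        f.write(f"{pleasantness:.3f};{eventfulness:.3f}\n")
```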
NOTE: Run it with the created virtual environment activated.
- Also in this code, path/to/main.py works similarly to calculation.py but additionally sends the predictions via TCP to a specified address:
python main.py <ip address> <ip port>
NOTE: Run it with the created virtual environment activated.
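The TCP sending step can be sketched as below, assuming a newline-terminated JSON message (the actual wire format used by main.py may differ):

```python
import json
import socket

def send_predictions(ip, port, predictions):
    """Open a TCP connection and send one JSON-encoded prediction message."""
    with socket.create_connection((ip, port), timeout=5) as sock:
        sock.sendall((json.dumps(predictions) + "\n").encode("utf-8"))
```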
Distributed under the ...