Sentiment Analysis of VoiceNotes (SAVN)

The SAVN system consists of an Audio-Based Sentiment Analysis (ABSA) model that analyzes the sentiments conveyed in a voice note. The system will be hosted on the AWS cloud platform in order to give individuals a reliable way of measuring the human quality of spoken conversations.

Table of Contents:

  1. About
  2. Folder Structure
  3. How it works?
  4. Installation
  5. How to run the application?
  6. What's what?
  7. Demonstration
  8. Collaborators
  9. References

1. About

Hundreds of people record voice notes every day, all around the world. A voice note is created by speaking into an electronic device. It can be used to deliver important, time-sensitive information, or it can simply be a recording of a conversation. A system that analyses voice notes for their sentiments offers a reliable way of measuring the human quality of spoken conversations from a recording. This is where our SAVN system steps in.

2. Folder Structure

SentimentAnalysisOfVoiceNotes
├───assets
│   ├───models
│   │   ├───audio_sentiment_model
│   └───pickles
├───sentiment_analysis
├───static
│   ├───css
│   ├───img
│   ├───js
│   ├───temp
│   └───vendor
├───templates
├───tests
└───uploads

3. How it works?

SAVN analyses the emotions conveyed in an audio recording. The user provides an MP3 audio file as input to the system. The features of the input audio are extracted and then analysed by the ABSA model, which generates a list of predicted sentiments. The predicted sentiments, along with their respective segmented audio files, are presented to the user as the output.
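
As a rough illustration of this flow, the sketch below extracts features with librosa and feeds them to a saved Keras model. The MFCC features, the assets/models/audio_sentiment_model path, the input shape, and the label list are assumptions made for illustration only; the actual features and labels used by the ABSA model may differ.

# Illustrative sketch only: extract audio features and predict a sentiment for one segment.
# Assumptions (not confirmed by this repository): MFCC features via librosa, a Keras model
# saved under assets/models/audio_sentiment_model, and a hypothetical label list.
import numpy as np
import librosa
import tensorflow as tf

LABELS = ["angry", "happy", "neutral", "sad"]  # hypothetical label order

def extract_features(audio_path, n_mfcc=40):
    # Load the audio and summarise it as a single mean MFCC vector of shape (1, n_mfcc).
    signal, sample_rate = librosa.load(audio_path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=n_mfcc)
    return np.mean(mfcc, axis=1).reshape(1, -1)

def predict_sentiment(audio_path, model_dir="assets/models/audio_sentiment_model"):
    # Run the saved model on the extracted features and return the most likely label.
    model = tf.keras.models.load_model(model_dir)
    probabilities = model.predict(extract_features(audio_path))[0]
    return LABELS[int(np.argmax(probabilities))]

if __name__ == "__main__":
    print(predict_sentiment("uploads/example_segment.mp3"))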

4. Installation

Clone the repository. Before installing the requirements, create a Python or conda environment. This is an important step, as the TensorFlow version used by this system (2.4.1) works only on Python 3.7 and not on any later version.

4.1 Creating a Python environment:

Open your terminal and install the virtual environment tool with pip as follows:

pip install virtualenv

After virtualenv has been installed, cd from the terminal into the folder where you've saved this application and run the following command to create a virtual environment:

cd path_to_folder
virtualenv -p python3.7.10 env_name

Activate your environment (on Windows):

env_name\Scripts\activate

4.2 Creating a conda environment:

Open your Anaconda prompt (you can also use Miniconda). Create a conda environment using the following command:

conda create -n env_name python=3.7.10 anaconda

After successfully creating your environment, activate it by running:

conda activate env_name

Once you have created an environment using either of the above methods, install the application's requirements:

pip install -r requirements.txt

5. How to run the application?

Open your terminal, activate your Python/conda environment, and run the app.py file using the following command:

python app.py

or

flask run
6. What's what?

  • This application has been created using Flask, Bootstrap, jQuery, and Ajax.

  • The app.py file contains the Flask application.

  • This Flask application uses various templates that are created using HTML and stored in the templates folder.

  • The CSS and JavaScript files used by the HTML templates are stored in the static folder.

  • The main page, i.e. the index.html file, contains the basic details of this application: how it works, about the system, about the team, etc.

  • This application accepts MP3 audio files as input (upload_file.html), which are saved in the uploads folder (a sketch of such an upload route appears after this list).

  • This audio is then segmented, processed, and analysed by our ABSA model, which in turn generates a list of predicted sentiments.

  • The segmented audio, along with the predicted emotion for each audio segment, is displayed to the user as the output (download.html).

  • The assets and sentiment_analysis folders contain the models used, the dependencies, and other code that is used to run the application.
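
The upload flow referenced in the list above could look roughly like the following sketch. The route name, the form field name, and the analyse() helper are hypothetical placeholders standing in for the actual code in app.py and the sentiment_analysis folder; they are not taken from the repository.

# Hypothetical sketch of an upload-and-analyse route; the actual routes in app.py may differ.
import os
from flask import Flask, render_template, request
from werkzeug.utils import secure_filename

app = Flask(__name__)
app.config["UPLOAD_FOLDER"] = "uploads"

def analyse(audio_path):
    # Placeholder for the segmentation + ABSA prediction step described above;
    # the real implementation presumably lives in the sentiment_analysis folder.
    return [("segment_1.mp3", "neutral")]

@app.route("/upload", methods=["GET", "POST"])
def upload_file():
    if request.method == "POST":
        audio = request.files["file"]  # MP3 file submitted through upload_file.html
        path = os.path.join(app.config["UPLOAD_FOLDER"], secure_filename(audio.filename))
        audio.save(path)  # keep a copy in the uploads folder
        results = analyse(path)  # list of (audio segment, predicted emotion) pairs
        return render_template("download.html", results=results)
    return render_template("upload_file.html")

if __name__ == "__main__":
    app.run(debug=True)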

7. Demonstration

Click on the GIF to watch the demonstration video.

SAVN System's Demonstration

