
Machine learning models to characterize the cough signal of patients with COVID-19

Read Article

Date of Conference

July 18-22, 2022

Published In

"Education, Research and Leadership in Post-pandemic Engineering: Resilient, Inclusive and Sustainable Actions"

Location of Conference

Boca Raton

Authors

Salamea, Christian

Sánchez, Tarquino

Calderón, Xavier

Guaña, Javier

Castañeda, Paulo

Reina, Jessica

Abstract

Automatic recognition of audio signals is a challenging task due to the difficulty of extracting important attributes from such signals; determining whether a cough recording comes from a COVID-19 patient relies heavily on discriminating acoustic features. In this work, the use of state-of-the-art pre-trained models and a convolutional neural network for extracting characteristics of the cough signal of patients with COVID-19 is analyzed. A comparison of three machine learning models is proposed for extracting the features that contain the relevant information, leading to recognition of the COVID-19 cough signal. The first model is a basic convolutional neural network, the second is based on the pre-trained YAMNet model, and the third on the pre-trained VGGish model. Experimental results on the ComParE 2021 CCS database show that, of the three models evaluated, the pre-trained VGGish model provides the best performance when extracting characteristics of COVID-19 cough audio signals, achieving an F1 score of 30.76% and an accuracy of 80.51%, an improvement of 6.06% and 3.61%, respectively, over the YAMNet model; the confusion matrices further validate this result.
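The abstract evaluates the models with F1 score, accuracy, and confusion matrices. As a brief illustration (not the authors' code), both metrics can be derived directly from the four counts of a binary confusion matrix; the counts below are arbitrary placeholders, not the paper's results:

```python
def metrics_from_confusion(tp, fp, fn, tn):
    """Compute accuracy and F1 score from binary confusion-matrix counts.

    tp/fp/fn/tn: true positives, false positives, false negatives,
    true negatives for the positive (COVID-19 cough) class.
    """
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, f1

# Arbitrary example counts (hypothetical, not from the paper):
acc, f1 = metrics_from_confusion(tp=3, fp=1, fn=2, tn=4)
```

On an imbalanced dataset such as cough classification, accuracy and F1 can diverge sharply, which is consistent with the large gap between the two metrics reported in the abstract.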
