Machine Learning Techniques for Sign Language Recognition (#629)
Date of Conference
July 16-18, 2025
Published In
"Engineering, Artificial Intelligence, and Sustainable Technologies in service of society"
Location of Conference
Mexico
Authors
Osejo, Victor
Ballagán, Mateo
Oñate, Estefanía
Guerrero, Jeffrey
Moya, Viviana
Pilco, Andrea
Vásconez, Juan Pablo
Abstract
In this paper, a sign language recognition system for the Ecuadorian Sign Language vowels (A, E, I, O, U) using Random Forest (RF) and YOLOv8 models is proposed. For this purpose, a new dataset of 500 RGB images of single-hand gestures captured in natural light was created. The RF model was trained on normalized hand landmark coordinates extracted with MediaPipe, while YOLOv8 operated directly on higher-resolution images for real-time gesture detection. Hypothesis testing showed that the RF model achieved better accuracy, precision, recall, and computational complexity, with accuracy, precision, and recall all reaching 100%, making it the preferred choice for real-time applications. YOLOv8 also performed strongly, reaching a precision of 100%, which makes it well suited to image-based tasks. Final real-time inference tests confirmed the scalability and efficiency of the RF model, which classified gestures with an average inference time of 0.0055 seconds. This paper underscores the importance of machine learning models in enhancing inclusion and closing communication barriers for the hearing-impaired population.
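The landmark-based preprocessing described above can be sketched as follows. The abstract states that the RF model uses normalized MediaPipe hand landmark coordinates but does not specify the normalization scheme, so the wrist-relative, max-scaled variant below is an assumption for illustration only:

```python
# Sketch of hand-landmark normalization feeding an RF classifier.
# Assumed scheme (the paper does not specify it): translate so the
# wrist (MediaPipe landmark 0) is the origin, then scale so the
# largest absolute coordinate is 1. MediaPipe Hands yields 21
# (x, y) landmarks per detected hand.

def normalize_landmarks(landmarks):
    """Convert 21 (x, y) landmark tuples into a flat 42-element
    feature vector suitable for a Random Forest classifier."""
    wrist_x, wrist_y = landmarks[0]
    # Translate: express every landmark relative to the wrist.
    rel = [(x - wrist_x, y - wrist_y) for x, y in landmarks]
    # Scale: divide by the largest absolute coordinate (guard
    # against the degenerate all-zero case).
    scale = max(max(abs(x), abs(y)) for x, y in rel) or 1.0
    return [c / scale for point in rel for c in point]

# Tiny example with 21 dummy landmarks (hypothetical values).
dummy = [(0.5, 0.5)] + [(0.5 + 0.01 * i, 0.5 - 0.01 * i) for i in range(1, 21)]
features = normalize_landmarks(dummy)
print(len(features))  # 42 features: 21 landmarks x 2 coordinates
```

This representation makes the features invariant to hand position and, to a first approximation, to hand size, which is one plausible reason a lightweight classifier such as RF can reach the reported real-time inference speeds.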