Real-Time Sign Language Recognition and Translation Using Deep Learning Techniques
DOI:
https://doi.org/10.47392/IRJAEH.2024.0018

Keywords:
Sign Language Recognition (SLR), Sign Language Translation (SLT), YOLOv5 architecture, Convolutional Neural Network (CNN), mAP values

Abstract
Sign Language Recognition (SLR) recognizes hand gestures and produces the corresponding text or speech. Despite advances in deep learning, SLR still faces challenges in accuracy and visual quality. Sign Language Translation (SLT) aims to translate sign language images or videos into spoken language, but is hampered by the scarcity of language-comprehension datasets. This paper presents an approach to sign language recognition and conversion to text using a custom dataset of 15 classes, each containing 70-75 images. The proposed solution uses the YOLOv5 architecture, a state-of-the-art Convolutional Neural Network (CNN), to achieve robust and accurate sign language recognition. With careful training and optimization, the model achieves mean Average Precision (mAP) values of 92% to 99% across the 15 classes. The extensive dataset combined with the YOLOv5 model enables effective real-time sign language interpretation, showing the potential to improve accessibility and communication for the hearing impaired. This work lays the groundwork for further advances in sign language recognition systems, with implications for inclusive technology.
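The abstract describes a detection-based pipeline: a YOLOv5 model trained on 15 sign classes localizes a hand gesture in each frame and the predicted class name becomes the output text. The sketch below shows what such a real-time loop might look like, assuming the Ultralytics yolov5 torch.hub interface, OpenCV for webcam capture, and a hypothetical weights file best.pt trained on the 15 classes; these names are illustrative and not taken from the paper itself.

```python
import cv2
import torch

# Load hypothetical custom-trained weights via torch.hub
# (fetches the ultralytics/yolov5 repo on first run).
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
model.conf = 0.5  # confidence threshold for reported detections

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # YOLOv5 expects RGB input; OpenCV captures frames as BGR.
    results = model(frame[:, :, ::-1])
    # Each detection row holds: xmin, ymin, xmax, ymax, confidence, class, name.
    for _, det in results.pandas().xyxy[0].iterrows():
        x1, y1 = int(det["xmin"]), int(det["ymin"])
        x2, y2 = int(det["xmax"]), int(det["ymax"])
        label = f"{det['name']} {det['confidence']:.2f}"
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, label, (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("Sign Language Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

In a sign-to-text setting, the class name of the highest-confidence detection per frame would be emitted as the recognized word; accumulating those predictions over consecutive frames yields the translated text stream.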
License
Copyright (c) 2024 International Research Journal on Advanced Engineering Hub (IRJAEH)
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.