Real-Time Sign Language Recognition and Translation Using Deep Learning Techniques

Authors

  • Tazyeen Fathima, UG, Artificial Intelligence and Machine Learning Engineering, Dayananda Sagar College of Engineering, Bangalore, Karnataka, India. https://orcid.org/0009-0001-2597-1044
  • Ashif Alam, UG, Artificial Intelligence and Machine Learning Engineering, Dayananda Sagar College of Engineering, Bangalore, Karnataka, India.
  • Ashish Gangwar, UG, Artificial Intelligence and Machine Learning Engineering, Dayananda Sagar College of Engineering, Bangalore, Karnataka, India.
  • Dev Kumar Khetan, UG, Artificial Intelligence and Machine Learning Engineering, Dayananda Sagar College of Engineering, Bangalore, Karnataka, India.
  • Prof. Ramya K, Assistant Professor, Artificial Intelligence and Machine Learning Engineering, Dayananda Sagar College of Engineering, Bangalore, Karnataka, India.

DOI:

https://doi.org/10.47392/IRJAEH.2024.0018

Keywords:

Sign Language Recognition (SLR), Sign Language Translation (SLT), YOLOv5 architecture, Convolutional Neural Network (CNN), mAP values

Abstract

Sign Language Recognition (SLR) identifies hand gestures and produces the corresponding text or speech. Despite advances in deep learning, SLR still faces challenges in accuracy and visual quality. Sign Language Translation (SLT) aims to translate sign language images or videos into spoken language, a task hampered by the limited size of available datasets. This paper presents an approach to sign language recognition and conversion to text using a custom dataset of 15 classes, each containing 70-75 images. The proposed solution uses the YOLOv5 architecture, a state-of-the-art Convolutional Neural Network (CNN), to achieve robust and accurate sign language recognition. With careful training and optimization, the model achieves mean Average Precision (mAP) values of 92% to 99% across the 15 classes. The custom dataset combined with the YOLOv5 model enables effective real-time sign language interpretation, showing the potential to improve accessibility and communication for the hearing impaired. This work lays the groundwork for further advances in sign language recognition systems, with implications for inclusive technology applications.
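The recognition-to-text step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a YOLOv5-style detector that emits (class_id, confidence, bounding_box) tuples per frame, and maps the most confident detection above a threshold to its sign label. The class names below are hypothetical placeholders, not the paper's 15 classes.

```python
# Illustrative subset of sign classes (hypothetical, not the paper's dataset)
SIGN_CLASSES = ["hello", "thanks", "yes", "no", "please"]

def detections_to_text(detections, conf_threshold=0.5):
    """Pick the most confident detection above the threshold and
    return the corresponding sign label, or None if nothing qualifies.

    Each detection is a (class_id, confidence, bounding_box) tuple,
    mimicking the per-frame output of a YOLOv5-style detector.
    """
    best = None
    for class_id, confidence, _box in detections:
        if confidence >= conf_threshold and (best is None or confidence > best[1]):
            best = (class_id, confidence)
    return SIGN_CLASSES[best[0]] if best is not None else None

# Example: two candidate detections in one frame; the higher-confidence
# detection (class 0, "hello") wins.
frame_detections = [(0, 0.91, (10, 20, 120, 200)), (3, 0.42, (5, 5, 60, 90))]
print(detections_to_text(frame_detections))  # -> "hello"
```

In a real-time pipeline this function would be applied to each frame of the detector's output stream, with the returned label fed to a text display or text-to-speech stage.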

Published

2024-02-23

How to Cite

Real-Time Sign Language Recognition and Translation Using Deep Learning Techniques. (2024). International Research Journal on Advanced Engineering Hub (IRJAEH), 2(02), 93-97. https://doi.org/10.47392/IRJAEH.2024.0018
