Indian Sign Language to Multilingual Text Using Deep Learning
DOI:
https://doi.org/10.47392/IRJAEH.2025.0447

Keywords:
3D Convolutional Neural Network (3D-CNN), Dynamic Gesture Recognition, Indian Sign Language (ISL), Multilingual Translation

Abstract
Sign language recognition is essential for improving communication accessibility for people with speech and hearing impairments. This study presents a sign language recognition system that uses a 3D Convolutional Neural Network (3D-CNN) to classify hand gestures from video clips. The proposed model extracts spatiotemporal features from video frames and classifies them with high accuracy and efficiency. The dataset covers several sign gestures and underwent extensive preprocessing, including frame extraction, augmentation, and normalization, to increase robustness. By incorporating multilingual translation, the system broadens its accessibility, rendering recognized gestures as text in English, Hindi, and Kannada. Experimental results show 99% classification accuracy, demonstrating the model's suitability for practical applications such as assistive communication devices and human-computer interaction. This work contributes to the advancement of inclusive AI-powered solutions that help people with hearing loss communicate.
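The distinguishing step in the pipeline above is that a 3D-CNN convolves across time as well as space, so a single filter responds to motion patterns spanning consecutive frames. The sketch below is a minimal, illustrative NumPy implementation of one such 3D convolution over a toy video clip; the clip size, kernel size, and function name are assumptions for illustration, not the architecture or data from the paper.

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Naive 'valid'-mode 3D convolution over a (frames, height, width) clip.

    Unlike a 2D convolution applied frame-by-frame, the kernel here also
    spans the time axis, so the output mixes spatial and temporal
    information -- the core idea behind 3D-CNN gesture recognition.
    """
    t, h, w = clip.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Element-wise product of the kernel with a spatiotemporal patch
                out[i, j, k] = np.sum(clip[i:i + kt, j:j + kh, k:k + kw] * kernel)
    return out

# Hypothetical input: an 8-frame 16x16 grayscale clip, normalized to [0, 1]
# (stands in for the preprocessed, normalized video frames in the paper)
rng = np.random.default_rng(0)
clip = rng.random((8, 16, 16))

# A 3x3x3 spatiotemporal filter spanning 3 consecutive frames
kernel = rng.standard_normal((3, 3, 3))

features = conv3d_valid(clip, kernel)
print(features.shape)  # (6, 14, 14): both time and space shrink by kernel-1
```

In a trained network this operation is repeated with many learned kernels, interleaved with pooling, and followed by fully connected layers that map the pooled features to gesture classes; the predicted class label can then be looked up in per-language text tables (English, Hindi, Kannada) for the multilingual output.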
License
Copyright (c) 2025 International Research Journal on Advanced Engineering Hub (IRJAEH)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.