Sign Speak: Sign Language-To-Speech and Speech-To-Sign Language
DOI:
https://doi.org/10.47392/IRJAEH.2025.0079

Keywords:
Two-Way Communication, Firebase, Flutter, TensorFlow Lite, Vision-Based Application, Natural Language Processing (NLP), Convolutional Neural Networks (CNN), Gesture Recognition, Sign Language Translation

Abstract
Communication is one of the essential means of survival for any community. The main goal of the proposed technique is to develop a vision-based application that translates sign language into intelligible speech and vice versa [1]. The objective is to establish a link between the hearing and hearing-impaired communities and enable two-way conversation [2]. This application presents an Android program that allows persons with visual, hearing, and speech impairments to converse with one another. Utilizing technologies such as TensorFlow Lite for gesture recognition, Flutter for mobile development, and Firebase for backend support, the application ensures flexibility and inclusivity. CNNs and natural language processing (NLP) improve gesture and speech recognition, and offline functionality addresses connectivity problems.
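The gesture-to-text step described above can be sketched as a classifier over extracted hand features: the model scores each candidate sign, and the highest-probability label becomes the word passed on to speech synthesis. The sketch below is a minimal stand-in, not the paper's implementation: a linear scorer with softmax replaces the CNN, and the gesture labels and toy weights are hypothetical.

```python
import numpy as np

# Hypothetical gesture vocabulary; the real app would use its trained label set.
LABELS = ["hello", "thanks", "yes"]

def softmax(scores):
    """Convert raw class scores into probabilities."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def classify_gesture(features, weights, bias):
    """Linear stand-in for the CNN: features -> scores -> (label, confidence)."""
    probs = softmax(weights @ features + bias)
    idx = int(np.argmax(probs))
    return LABELS[idx], float(probs[idx])

# Toy deterministic weights so the example is reproducible.
W = np.eye(3)
b = np.zeros(3)
label, conf = classify_gesture(np.array([0.1, 0.9, 0.2]), W, b)
# label == "thanks"; conf is the softmax probability of that class
```

In the deployed pipeline this classifier would run on-device via a TensorFlow Lite interpreter, which is what enables the offline operation mentioned in the abstract; the recognized label would then be handed to a text-to-speech engine.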
License
Copyright (c) 2025 International Research Journal on Advanced Engineering Hub (IRJAEH)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.