Visual Recognition Based Mobile Application for Visually Impaired People
DOI: https://doi.org/10.47392/IRJAEH.2025.0300

Keywords: Visual Impairment, Assistive Technology, Real-time Object Detection, Text-to-Speech (TTS), YOLO, Mobile Application, TensorFlow Lite, Minimal Interaction, Offline Accessibility, React Native, Cross-platform Development

Abstract
Visually impaired users find it extremely hard to navigate environments, recognize objects, and read printed text. Existing assistive technologies are marred by high costs, clunky interfaces, and reliance on internet connectivity. This paper presents a mobile application that addresses these problems using real-time computer vision and AI. The application integrates YOLO for object recognition, Tesseract-based OCR for text recognition, and TTS synthesis for multimodal, hands-free output. Key contributions include voice-controlled automatic mode switching, offline operation powered by TensorFlow Lite, and cross-platform compatibility via the React Native framework. Performance experiments demonstrate 95% text recognition accuracy under favorable lighting and 89% object detection accuracy with an average execution time of 28 ms, a marked improvement over existing solutions such as Google Lookout. The application operates independently of internet connectivity, reduces cognitive load through voice-controlled interaction, and remains affordable by running on stock smartphone hardware. The comparative study demonstrates advantages in affordability, accuracy, and offline capability, making the solution a practical tool for fostering independence among visually impaired users. Future development involves expanding the set of detectable objects and adding depth sensing to minimize optical-illusion errors.
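The abstract describes the pipeline only at a high level. As an illustration of the kind of detect-or-read-then-speak loop it outlines, the following is a minimal Python sketch using tf.lite, pytesseract, and pyttsx3 as desktop stand-ins for the mobile stack (YOLO on TensorFlow Lite, Tesseract OCR, on-device TTS). The model file name, float32 preprocessing, and mode handling are hypothetical assumptions for the sketch, not the authors' implementation.

```python
# Illustrative desktop prototype of the offline pipeline sketched in the
# abstract: YOLO-style detection via TensorFlow Lite, Tesseract OCR for
# printed text, and local TTS output. Model path and thresholds are
# hypothetical placeholders.
import numpy as np
import cv2                 # camera frames / image I/O
import pytesseract         # Tesseract OCR bindings
import pyttsx3             # offline text-to-speech engine
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yolo_detector.tflite")  # hypothetical model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
tts = pyttsx3.init()

def detect_objects(frame):
    """Run one TFLite inference pass; assumes a float32 NHWC input."""
    h, w = inp["shape"][1], inp["shape"][2]
    img = cv2.resize(frame, (w, h)).astype(np.float32) / 255.0
    interpreter.set_tensor(inp["index"], img[np.newaxis, ...])
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])  # raw tensor; decoding is model-specific

def read_text(frame):
    """OCR the frame; Tesseract works best on high-contrast, well-lit text."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return pytesseract.image_to_string(gray).strip()

def speak(message):
    """Speak feedback aloud with no network dependency."""
    tts.say(message)
    tts.runAndWait()

# Voice-controlled mode switching is simplified to a fixed string here.
mode = "read"                        # or "detect"
frame = cv2.imread("sample.jpg")     # stand-in for a live camera frame
if mode == "read":
    text = read_text(frame)
    speak(text if text else "No readable text found")
else:
    raw = detect_objects(frame)
    # A real app would threshold scores in `raw` and map class indices
    # to spoken labels before announcing results.
    speak("Object detection pass complete")
```

In the actual application this loop would run on live camera frames inside the React Native app, with the spoken mode command ("read" or "detect") selecting the branch.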
License
Copyright (c) 2025 International Research Journal on Advanced Engineering Hub (IRJAEH)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.