AIoT Smart Eyewear with Real-Time Object and Audio Recognition for Visually Impaired Users
Abstract
The proposed smart eyewear aims to assist visually impaired (VI) or blind individuals by providing object detection and real-time audio descriptions of their surroundings, helping them navigate safely and independently. This is achieved by integrating artificial intelligence (AI) and Internet of Things (IoT) technologies into a compact, wearable system. The work presents a real-time, Artificial Intelligence of Things (AIoT) based assistive system built around an ESP32-CAM module for real-time object detection, a TF-Luna LiDAR sensor for distance measurement, and a Bluetooth-enabled neckband for audio feedback. The system uses YOLO for object detection and Roboflow for dataset preparation and training, and it includes night-time navigation and an emergency (SOS) alert for enhanced safety. Experimental results showed high performance across various lighting conditions: an object detection accuracy of 94.93%, and distance estimation with a mean absolute error (MAE) of 0.34 cm and a root mean square error (RMSE) of 0.44 cm. The SOS alert system responded with 100% accuracy in emergency situations.
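The reported distance-estimation metrics (MAE and RMSE, in cm) can be computed from paired ground-truth and LiDAR readings. A minimal sketch of how such metrics are calculated; the sample readings below are illustrative placeholders, not the paper's measurement data:

```python
import math

def mae(truth, pred):
    # Mean absolute error: average of |truth - pred|
    return sum(abs(t - p) for t, p in zip(truth, pred)) / len(truth)

def rmse(truth, pred):
    # Root mean square error: sqrt of the mean squared deviation
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(truth, pred)) / len(truth))

# Illustrative distances in cm (hypothetical, not the paper's data)
ground_truth = [50.0, 100.0, 150.0, 200.0]
lidar_read = [50.3, 99.6, 150.5, 199.7]

print(f"MAE:  {mae(ground_truth, lidar_read):.2f} cm")
print(f"RMSE: {rmse(ground_truth, lidar_read):.2f} cm")
```

The paper's reported values (MAE 0.34 cm, RMSE 0.44 cm) would come from applying the same formulas to the authors' full set of TF-Luna measurements.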
Authors
Pritam Nanda; Soumya Ranjan Samal; Shuvabrata Bandopadhaya; Debi Prasad Pradhan; Antoni Ivanov; Vladimir Poulkov
Venue
Journal of Mobile Multimedia (Volume: 22, Issue: 1, January 2026)
Links
https://ieeexplore.ieee.org/abstract/document/11456474
Keywords
Assistive system; Roboflow; Tesseract OCR; TF-Luna LiDAR; Visually Impaired; VIuNI; YOLO
