Please use this identifier to cite or link to this item: https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4772
Full metadata record
DC Field | Value | Language
dc.contributor.author | Silva, P.L.C.S. | -
dc.date.accessioned | 2024-09-26T09:18:10Z | -
dc.date.available | 2024-09-26T09:18:10Z | -
dc.date.issued | 2021 | -
dc.identifier.uri | https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4772 | -
dc.description.abstract | Individuals who are differently abled in vision cannot proceed with their day-to-day activities as smoothly as other people do; independent walking in particular is a hard target to achieve with a visual impairment. Assistive technology for the mobility of blind people is an emerging area in which several scientific contributions support the navigation of visually impaired people, mainly through intelligent environments and accessible, context-aware smart navigation aids. However, most assistive navigation aids depend on measurements acquired by a single type of sensor attached to the user; research on combining multiple sensors in such aids is limited, and most of it targets sensor integration rather than sensor fusion. The literature also shows little integration of navigational sub-processes such as obstacle detection, localization, motion planning, and context awareness among navigation aids for visually impaired persons.

This thesis introduces an assistive navigation framework comprising the sub-processes of obstacle detection, recognition, localization, context awareness, and motion planning, built on fusions of homogeneous and heterogeneous sensors so that the strengths of different sensors compensate for the drawbacks of individual sensor types. The research questions were approached through five investigations, corresponding to these sub-processes, and evaluated over several proof-of-concept architectures. First, obstacle detection uses a set of sonar sensors to detect obstacles during visually impaired navigation; homogeneous fusion of two ultrasonic sensors, based on an extended Kalman filter, was then carried out to improve ground-level obstacle detection. Second, for obstacle recognition, a vision sensor and computer vision allow users to identify the objects around them, which is impossible with an ultrasonic sensor alone. Third, localization fuses measurements from a Global Positioning System (GPS) sensor and an inertial sensor using an error-state extended Kalman filter. Fourth, motion planning builds on the outputs of obstacle detection and localization. Fifth, context analysis considers the degree of safety during navigation and is tested using actual scene data collected from the sonar sensors and individual subject data obtained from a personalized smartphone application. Finally, multi-modal feedback reports the outputs of these sub-processes to the navigator through audio and tactile cues.

Investigating and determining the optimal use of complementary sensors, in terms of the type and number of sensory channels, to aid visually impaired persons in a dynamic real-world setting is a crucial challenge; simulation-based usability experiments are therefore a pragmatic and cost-effective approach in such studies. This thesis designs and implements a three-dimensional simulation-based test bed for usability evaluations of the navigation sub-processes of obstacle detection, recognition, localization, and motion planning. The usability experiments conducted in the simulated environment led to significant findings comparable to a real-world setting.
To analyze and benchmark the results obtained from the three-dimensional simulated environment, it is necessary to understand how visually impaired persons can use the proof-of-concept prototype in real-world situations. Evaluation experiments were therefore carried out in a controlled real-world environment. The protocol involved selecting a sampling plan, setting up the controlled environments, and conducting the experiments, after which results were analyzed for each navigation sub-process. In obstacle detection, the sonar-sensor set achieved its highest average score, 98%, for frontal obstacle detection, with 89% and 86% for left and right obstacle detection, respectively. Benchmarking the localization estimates produced by the error-state extended Kalman filter fusion of the GPS and inertial sensors against the ground-truth reference shows only a 28.37% relative error. In the feedback evaluation, voice feedback scored higher (8.5-10) than tactile feedback (6-8) on a Likert scale graded from 0 to 10.

In conclusion, this thesis presents a novel framework that integrates several navigational sub-processes through sensor fusion. The approach makes several contributions to the field of sensor fusion in visually impaired navigation: a novel homogeneous sensor fusion algorithm based on the extended Kalman filter to fuse multiple sonar sensors; a novel heterogeneous sensor integration approach combining vision and sonar sensors; a complementary sensor fusion algorithm based on the error-state extended Kalman filter to fuse inertial and GPS sensors for localization; and a novel hybrid walking-context estimation method based on environmental adaptation and personalization. The framework can be extended with further navigational sub-processes and additional sensors to provide an independent navigation experience for visually impaired people, and the proof-of-concept implementations are scalable to incorporate future assistive technologies. Most importantly, to the best of our knowledge, the three-dimensional simulation developed in this thesis is the first to investigate a hybrid sensor fusion approach for visually impaired navigation covering the sub-processes of obstacle detection, obstacle recognition, localization, and motion planning in a three-dimensional simulated environment. | en_US
dc.language.iso | en | en_US
dc.title | A Sensor Fusion Framework for Visually Impaired Navigation | en_US
dc.type | Thesis | en_US
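
As an illustration of the homogeneous fusion described in the abstract, here is a minimal sketch of fusing two ultrasonic range sensors with a Kalman filter. The thesis uses an extended Kalman filter; under the simple linear constant-range model assumed here, the update reduces to the standard scalar Kalman recursion. The class name, noise variances, and motion model are illustrative assumptions, not the thesis's implementation.

```python
# Minimal sketch (assumed parameters): a scalar Kalman filter fuses range
# readings from two sonar sensors for ground-level obstacle detection.
import numpy as np

class SonarFusionKF:
    def __init__(self, initial_range_m, process_var=0.05, sensor_vars=(0.10, 0.15)):
        self.x = initial_range_m   # estimated distance to obstacle (m)
        self.p = 1.0               # variance of the estimate
        self.q = process_var       # process noise (user/obstacle motion)
        self.r = sensor_vars       # measurement variances of sonar 1 and 2

    def predict(self):
        # Constant-range model: uncertainty grows between readings.
        self.p += self.q

    def update(self, z, sensor_idx):
        # Scalar Kalman update for one sonar measurement z (m).
        r = self.r[sensor_idx]
        k = self.p / (self.p + r)      # Kalman gain
        self.x += k * (z - self.x)     # pull estimate toward measurement
        self.p *= (1.0 - k)            # shrink variance

    def step(self, z1, z2):
        # Fuse one synchronized reading from each sonar sequentially.
        self.predict()
        self.update(z1, 0)
        self.update(z2, 1)
        return self.x

rng = np.random.default_rng(0)
kf = SonarFusionKF(initial_range_m=2.0)
true_range = 2.0
for _ in range(20):
    true_range -= 0.05                              # user walks toward obstacle
    z1 = true_range + rng.normal(0.0, 0.10 ** 0.5)  # std = sqrt(variance)
    z2 = true_range + rng.normal(0.0, 0.15 ** 0.5)
    est = kf.step(z1, z2)
print(f"true range: {true_range:.2f} m, fused estimate: {est:.2f} m")
```

Updating with each sonar in sequence weights every reading by its own noise variance, which is how the second sensor tightens the ground-level estimate beyond what either sonar achieves alone.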
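The localization sub-process benchmarked in the abstract rests on error-state extended Kalman filtering of GPS and inertial measurements. The one-dimensional sketch below conveys only the propagate-and-correct structure, under strong simplifying assumptions: a linear model, a bias-free accelerometer, and illustrative rates and noise values that are not the thesis's tuning.

```python
# Minimal 1-D sketch (assumed model): the IMU drives a high-rate nominal
# state; each low-rate GPS fix estimates the accumulated error and injects
# it back into the nominal state, implicitly resetting the error to zero.
import numpy as np

dt = 0.01                                  # IMU period (100 Hz, assumed)
x_nom = np.array([0.0, 1.0])               # nominal [position, velocity]
P = np.eye(2) * 0.01                       # error-state covariance
F = np.array([[1.0, dt],
              [0.0, 1.0]])                 # error-state transition
Q = np.diag([1e-6, 1e-4])                  # IMU noise (assumed)
H = np.array([[1.0, 0.0]])                 # GPS observes position only
R = np.array([[4.0]])                      # GPS variance, ~2 m std (assumed)

rng = np.random.default_rng(1)
true_pos, true_vel = 0.0, 1.2              # walking at ~1.2 m/s

for k in range(1, 1001):                   # 10 s of data
    accel = rng.normal(0.0, 0.05)          # accelerometer reading (bias-free)
    # Propagate the nominal state with the IMU measurement.
    x_nom = np.array([x_nom[0] + x_nom[1] * dt + 0.5 * accel * dt**2,
                      x_nom[1] + accel * dt])
    true_pos += true_vel * dt
    # Propagate the error-state covariance.
    P = F @ P @ F.T + Q
    if k % 100 == 0:                       # GPS fix at 1 Hz
        z = true_pos + rng.normal(0.0, 2.0)
        y = z - H @ x_nom                  # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        dx = (K @ y).ravel()               # estimated error state
        x_nom = x_nom + dx                 # inject error into nominal state
        P = (np.eye(2) - K @ H) @ P

print(f"true position: {true_pos:.2f} m, fused estimate: {x_nom[0]:.2f} m")
```

The complementary character of the fusion is visible in the rates: the inertial sensor supplies smooth high-rate motion between fixes, while each low-rate GPS fix bounds the drift that pure inertial integration would accumulate.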
Appears in Collections: 2021

Files in This Item:
File | Description | Size | Format
PhD_PLCS Silva2021.pdf | - | 3.21 MB | Adobe PDF


Items in UCSC Digital Library are protected by copyright, with all rights reserved, unless otherwise indicated.