KNOWLEDGE SYSTEMATIZATION ABOUT THE CHARACTERISTICS OF EXISTING TECHNOLOGICAL MEANS FOR ASSISTING PEOPLE WITH VISUAL IMPAIRMENTS
Abstract
The work is devoted to a detailed review of the main aspects of physical vulnerability of people with visual impairments, as well as of technological means for navigation and adaptation to the surrounding environment that can significantly enhance their sense of safety and security. The relevance of the topic follows from its significant social impact: such systems help people with visual impairments socialize more easily and promote greater inclusion. This is particularly important in urban environments where insufficient attention is paid to inclusivity and the comfort of visually impaired individuals (e.g., a lack of audible traffic lights, tactile paving, etc.). The subject of the article is the hardware components that provide the functionality of support systems for people with visual impairments. The goal of this paper is to systematize knowledge about existing technological tools for people with visual impairments and to analyze the hardware characteristics of the components of such solutions. The tasks of this work are to examine the psychophysiological factors and aspects of physical vulnerability of people with visual impairments, review existing assistance systems for visually impaired individuals, identify the hardware base required for creating a “vision” system of the surrounding environment, and analyze the characteristics of sensors with regard to the external conditions in which visually impaired people may find themselves. These objectives are achieved through methods such as comparative analysis, classification and categorization, and a systematic review of the literature in the relevant problem domain.
The results of the work include a proposed classification of assistive devices for people with visual impairments, which encompasses the following classes: navigation applications and devices; sensory systems for obstacle and object detection; wearable devices with augmented reality (AR) features; “vision” systems for the surrounding environment; and text recognition systems. An evaluation of the advantages and disadvantages of devices in each of these classes demonstrates that a new solution should meet the criteria of compactness, wearability, energy efficiency, ease of use, and high accuracy in detecting environmental conditions, obstacles, and objects in the user's path. Conclusions. To ensure data complementarity in detecting moving objects in intelligent assistance systems for visually impaired individuals, the optimal approach is to combine multiple sensors using the multisensor fusion methodology: specifically, high-resolution cameras that provide detailed scene imaging, and LiDARs that provide precise distance measurement and 3D modeling of the environment. Such an approach compensates for the limitations of individual sensors and yields a more comprehensive understanding of the scene, improving data quality through the integration of diverse information sources. Further work will focus on experiments aimed at practically justifying the joint use of cameras, audio sensors, and LiDARs for obtaining heterogeneous data that provide the most complete depiction of the environment surrounding visually impaired individuals.
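At the geometric level, the camera–LiDAR fusion described in the conclusions amounts to projecting LiDAR range returns into the camera's pixel grid so that each image region can be annotated with a measured distance. The following is a minimal sketch of that step, assuming a pinhole camera model with known intrinsic matrix K and known LiDAR-to-camera extrinsics (R, t); the function names and calibration values are illustrative, not taken from any system reviewed in the article.

```python
import numpy as np

def project_lidar_to_image(points_xyz, K, R, t):
    """Project Nx3 LiDAR points (sensor frame) into pixel coordinates
    using a pinhole camera model. K is the 3x3 intrinsic matrix; R (3x3)
    and t (3,) map LiDAR coordinates into the camera frame.
    Returns (pixels, depths) for points in front of the camera."""
    cam = points_xyz @ R.T + t            # transform into camera frame
    in_front = cam[:, 2] > 0              # discard points behind the camera
    cam = cam[in_front]
    proj = cam @ K.T                      # apply intrinsics
    pixels = proj[:, :2] / proj[:, 2:3]   # perspective divide
    return pixels, cam[:, 2]

def fuse_depth(pixels, depths, image_shape):
    """Build a sparse depth map aligned with the camera image:
    each projected LiDAR return stamps its range onto the pixel grid,
    keeping the nearest return per pixel (the relevant obstacle)."""
    h, w = image_shape
    depth_map = np.full((h, w), np.inf)
    for (u, v), d in zip(pixels, depths):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < h and 0 <= ui < w:
            depth_map[vi, ui] = min(depth_map[vi, ui], d)
    return depth_map

# Illustrative calibration: focal length 100 px, principal point (32, 24).
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0,   0.0,  1.0]])
points = np.array([[0.0, 0.0, 5.0],    # obstacle 5 m straight ahead
                   [0.0, 0.0, -2.0]])  # behind the sensor, filtered out
px, depth = project_lidar_to_image(points, K, np.eye(3), np.zeros(3))
depth_map = fuse_depth(px, depth, (48, 64))
```

In this sketch the obstacle at 5 m projects onto the principal point, so the fused depth map carries a 5 m range at that pixel; a detector running on the camera image could then attach measured distances to the objects it finds, which is the complementarity argued for above.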
References
Turner, R. J. and McLean, P. D. (1989), “Physical disability and psychological distress”, Rehabilitation Psychology, vol. 34, no. 4, pp. 225–242, doi: https://doi.org/10.1037/h0091727
Turner, J. B. and Turner, R. J. (2004), “Physical Disability, Unemployment, and Mental Health”, Rehabilitation Psychology, vol. 49, no. 3, pp. 241–249, doi: https://doi.org/10.1037/0090-5550.49.3.241
Stebnicki, M. A. and Marini, I. (2012), The psychological and social impact of illness and disability, Springer Publishing Company, available at: https://www.amazon.com/Psychological-Social-Illness-Disability-Seventh/dp/0826161618
Barkovska, O. and Serdechnyi, V. (2024), “Intelligent assistance system for people with visual impairments”, Innovative technologies and scientific solutions for industries, vol. 2(28), pp. 6–16, doi: https://doi.org/10.30837/2522-9818.2024.28.006
Rahman, M. W., Tashfia, S. S., Islam, R., Hasan, M. M., Sultan, S. I., Mia, S. and Rahman, M. M. (2021), “The architectural design of smart blind assistant using IoT with deep learning paradigm”, Internet of Things, vol. 13, no. 100344, doi: https://doi.org/10.1016/j.iot.2020.100344
Tahoun, N., Awad, A. and Bonny, T. (2019), “Smart assistant for blind and visually impaired people”, Proceedings of the 3rd International Conference on Advances in Artificial Intelligence, pp. 227–231, doi: https://doi.org/10.1145/3369114.3369139
Liu, Y., Stiles, N. R. B. and Meister, M. (2018), “Augmented reality powers a cognitive assistant for the blind”, ELife, vol. 7, no. e37841, doi: https://doi.org/10.7554/eLife.37841
Tanveer, M. S. R., Hashem, M. M. A. and Hossain, M. K. (2015), “Android assistant EyeMate for blind and blind tracker”, 2015 18th international conference on computer and information technology (ICCIT), IEEE, pp. 266–271, doi: https://doi.org/10.1109/ICCITechn.2015.7488080
Sadi, M. S., Mahmud, S., Kamal, M. M., and Bayazid, A. I. (2014), “Automated walk-in assistant for the blinds”, 2014 International Conference on Electrical Engineering and Information & Communication Technology, doi: https://doi.org/10.1109/iceeict.2014.6919037
Kumar, K. and Mappoli, P. (2020), “An intelligent Assistant for the Visually Impaired & blind people using machine learning”, International Journal of Imaging and Robotics, vol. 20, no. 3, available at: http://www.ceser.in/ceserp/index.php/iji/article/view/6521
Petrovska, I., Kuchuk, H. and Mozhaiev, M. (2022), “Features of the distribution of computing resources in cloud systems”, 2022 IEEE 4th KhPI Week on Advanced Technology, KhPI Week 2022 - Conference Proceedings, 03-07 October 2022, Code 183771, doi: https://doi.org/10.1109/KhPIWeek57572.2022.9916459
Hunko, M., Tkachov, V., Kuchuk, H. and Kovalenko, A. (2023), “Advantages of Fog Computing: A Comparative Analysis with Cloud Computing for Enhanced Edge Computing Capabilities”, 2023 IEEE 4th KhPI Week on Advanced Technology, KhPI Week 2023 - Conference Proceedings, Code 194480, doi: https://doi.org/10.1109/KhPIWeek61412.2023.10312948
Khan, S., Nazir, S. and Khan, H. U. (2021), “Analysis of Navigation Assistants for Blind and Visually Impaired People: A Systematic Review”, IEEE Access, vol. 9, pp. 26712–26734, doi: https://doi.org/10.1109/ACCESS.2021.3052415
Kuchuk, H. and Kuliahin, A. (2024), “Hybrid Recommender For Virtual Art Compositions With Video Sentiments Analysis”, Advanced Information Systems, vol. 8, no. 1, pp. 70–79, doi: https://doi.org/10.20998/2522-9052.2024.1.09
Kuchuk, H., Podorozhniak, A., Liubchenko, N. and Onischenko, D. (2021), “System of license plate recognition considering large camera shooting angles”, Radioelectronic and Computer Systems, vol. 2021 (4), pp. 82–91, doi: https://doi.org/10.32620/REKS.2021.4.07
Sabab, S. A. and Ashmafee, M. H. (2016), “Blind reader: An intelligent assistant for blind”, 2016 19th International Conference on Computer and Information Technology, ICCIT, IEEE, pp. 229–234, doi: https://doi.org/10.1109/ICCITECHN.2016.7860200
Svyrydov, A., Kuchuk, H. and Tsiapa, O. (2018), “Improving efficienty of image recognition process: Approach and case study”, Proceedings of 2018 IEEE 9th International Conference on Dependable Systems, Services and Technologies, DESSERT 2018, pp. 593–597, doi: https://doi.org/10.1109/DESSERT.2018.8409201
Kharchenko, V., Andrashov, A., Sklyar, V., Kovalenko, A. and Siora, O. (2012), “Gap-and-IMECA-based assessment of I&C systems cyber security”, Advances in Intelligent and Soft Computing, 170 AISC, pp. 149–164, doi: https://doi.org/10.1007/978-3-642-30662-4_10
Cahyadi, W. A., Chung, Y. H., Ghassemlooy, Z. and Hassan, N. B. (2020), “Optical camera communications: Principles, modulations, potential and challenges”, Electronics, vol. 9, no. 9, 1339, doi: https://doi.org/10.3390/electronics9091339
Raj, T., Hashim, F.H., Huddin, A.B., Ibrahim, M.F. and Hussain, A. (2020), “A Survey on LiDAR Scanning Mechanisms”, Electronics, vol. 9, no. 741, doi: https://doi.org/10.3390/electronics9050741
Qu, Y., Yang, M., Zhang, J., Xie, W., Qiang, B. and Chen, J. (2021), “An Outline of Multi-Sensor Fusion Methods for Mobile Agents Indoor Navigation”, Sensors, vol. 21, no. 1605, doi: https://doi.org/10.3390/s21051605
Chen, G., Liu, Z., Yu, G. and Liang, J. (2021), “A new view of multisensor data fusion: research on generalized fusion”, Mathematical Problems in Engineering, vol. 2021, no. 5471242, pp. 1–21, doi: https://doi.org/10.1155/2021/5471242
Tsanousa, A., Bektsis, E., Kyriakopoulos, C., González, A.G., Leturiondo, U., Gialampoukidis, I., Karakostas, A., Vrochidis, S. and Kompatsiaris, I. (2022), “A Review of Multisensor Data Fusion Solutions in Smart Manufacturing: Systems and Trends”, Sensors, vol. 22, no. 1734, doi: https://doi.org/10.3390/s22051734
Marsh, B., Sadka, A.H. and Bahai, H. (2022), “A Critical Review of Deep Learning-Based Multi-Sensor Fusion Techniques”, Sensors, vol. 22, no. 9364, doi: https://doi.org/10.3390/s22239364
Li, N., Ho, C. P., Xue, J., Lim, L. W., Chen, G., Fu, Y. H. and Lee, L. Y. T. (2022), “A progress review on solid‐state LiDAR and nanophotonics‐based LiDAR sensors”, Laser & Photonics Reviews, vol. 16, doi: https://doi.org/10.1002/lpor.202100511