FUZZY IMAGE CLASSIFIER IN LARGE DYNAMIC DATABASES
Abstract
In today's rapidly growing visual information environment, efficient image search and classification in large dynamic databases, which are updated daily with tens of thousands of new objects, is becoming especially relevant. Such databases are characterized not only by their considerable size but also by a high degree of variability, which calls for algorithms that can quickly and accurately recognize distorted or modified versions of images under strict response-time constraints. The subject of study is a fuzzy classifier for clustering distorted versions of images in large dynamic databases. The aim of this work is to increase the accuracy of fast search for distorted versions of images in large dynamic databases to which information is added at a rate of 10-12 thousand images per day. Methods used: mathematical modeling, the two-dimensional discrete cosine transform, image processing methods, decision-making methods, and fuzzy mathematics. The following results were obtained. A fuzzy classifier for clustering distorted versions of images in large dynamic databases was developed. Experiments demonstrated that clustering of distorted image versions is fast and economical in terms of data volume and computational resources. ROC analysis confirmed the high quality of the developed fuzzy classifier.
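To illustrate the kind of pipeline the abstract names, the following minimal Python sketch builds a compact image signature from low-frequency coefficients of the two-dimensional discrete cosine transform and computes a fuzzy (fuzzy c-means style) membership of that signature in each cluster. The 8x8 coefficient block, the membership formula with fuzzifier m, and the random placeholder centroids are illustrative assumptions made here for the sketch; the actual structure of the classifier is defined in the full article, not in this abstract.

import numpy as np
from scipy.fft import dctn  # two-dimensional discrete cosine transform

def dct_signature(image, block=8):
    # Compact signature: the block x block low-frequency 2D DCT coefficients.
    coeffs = dctn(image.astype(float), norm="ortho")
    return coeffs[:block, :block].ravel()

def fuzzy_memberships(signature, centroids, m=2.0):
    # Fuzzy c-means style membership of the signature in each cluster centroid.
    d = np.linalg.norm(centroids - signature, axis=1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

# Usage: assign a (possibly distorted) image to the cluster with the highest
# membership; the centroids below are random placeholders for real cluster centres.
rng = np.random.default_rng(0)
image = rng.random((256, 256))
centroids = rng.random((5, 64))  # 5 clusters of 8x8 = 64-dimensional signatures
u = fuzzy_memberships(dct_signature(image), centroids)
print(int(np.argmax(u)), float(u.max()))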