EXPLAINABLE ARTIFICIAL INTELLIGENT METHOD GRAD-CAM IN MEDICAL IMAGES PROCESSING


Alexandra Čižmárová
Kristína Dostálová
Patrik Hrkut
Anton Poroshenko

Abstract

The reliability of modern deep learning models in the medical domain is frequently questioned because of their black-box nature. Post-hoc explainability techniques from the field of explainable artificial intelligence (XAI) offer a means to improve transparency and to assess the reliability of predictions produced by convolutional neural networks. This research investigates how XAI methods, specifically Gradient-weighted Class Activation Mapping (Grad-CAM), can provide reliable explanations for medical image classification. For this purpose, brain MRI images were used to train a convolutional neural network to classify four stages of dementia in Alzheimer's disease. To make each prediction transparent, Grad-CAM was used to highlight the brain regions on which the trained network based its classification. The resulting relevance maps (heatmaps) were evaluated using two approaches: spatial comparison with anatomically defined brain regions associated with Alzheimer's disease via atlas overlay, and quantitative faithfulness assessment using a deletion-based metric, in which the highly influential regions identified by Grad-CAM were progressively removed and the impact on classification confidence was measured.
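The two core computations the abstract describes, the Grad-CAM relevance map and the deletion-based faithfulness metric, can each be sketched in a few lines. The snippet below is an illustrative NumPy sketch, not the authors' implementation: `grad_cam` assumes the activations and gradients of the network's last convolutional layer have already been extracted, and `model_confidence` in `deletion_auc` is a hypothetical callable that returns the classifier's confidence for the target class given an image.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM: weight each feature map of the last conv layer by its
    spatially averaged gradient, sum over channels, and keep only the
    positive contributions (ReLU). Both inputs have shape (K, H, W)."""
    alphas = gradients.mean(axis=(1, 2))             # global-average-pool the gradients
    cam = np.tensordot(alphas, activations, axes=1)  # channel-weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                       # ReLU keeps class-positive evidence
    return cam / cam.max() if cam.max() > 0 else cam # normalize to [0, 1]

def deletion_auc(image, heatmap, model_confidence, steps=10, baseline=0.0):
    """Deletion metric: zero out pixels from most to least relevant
    (as ranked by the heatmap) and track how classification confidence
    drops. A lower mean confidence indicates a more faithful explanation."""
    order = np.argsort(heatmap.ravel())[::-1]        # most relevant pixels first
    perturbed = image.copy()
    confidences = [model_confidence(perturbed)]
    chunk = max(1, order.size // steps)
    for start in range(0, order.size, chunk):
        idx = np.unravel_index(order[start:start + chunk], heatmap.shape)
        perturbed[idx] = baseline                    # "delete" the next block of pixels
        confidences.append(model_confidence(perturbed))
    return float(np.mean(confidences))               # approximates area under the curve
```

In a real pipeline the coarse Grad-CAM map would be upsampled to the input resolution before the deletion step; a confidence curve that collapses quickly when Grad-CAM-ranked pixels are removed suggests the explanation reflects evidence the model actually uses.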

Article Details

How to Cite
Čižmárová, A., Dostálová, K., Hrkut, P., & Poroshenko, A. (2026). EXPLAINABLE ARTIFICIAL INTELLIGENT METHOD GRAD-CAM IN MEDICAL IMAGES PROCESSING. Advanced Information Systems, 10(2), 79–86. https://doi.org/10.20998/2522-9052.2026.2.09
Section
Intelligent information systems
Author Biographies

Alexandra Čižmárová, University of Žilina, Žilina, Slovakia

Postgraduate Student at the Department of Informatics, Faculty of Management Science and Informatics

Kristína Dostálová, University of Žilina, Žilina, Slovakia

Postgraduate Student at the Department of Informatics, Faculty of Management Science and Informatics

Patrik Hrkut, University of Žilina, Žilina, Slovakia

Associate Professor, Head of the Department of Software Technologies, Faculty of Management Science and Informatics

Anton Poroshenko, Kharkiv National University of Radio Electronics, Kharkiv, Ukraine

PhD, Senior Lecturer of the Department of Electronic Computers

References

Uhryn, D. I., Ushenko, Y. O., Karachevtsev, A. O. and Halin, Y. O. (2026), “Comparative Evaluation of Deep Neural Networks for Brain Tumor Classification from Magnetic Resonance Imaging”, Herald of Advanced Information Technology, vol. 9, no. 1, pp. 100–113, doi: https://doi.org/10.15276/hait.09.2026.08

Zaitseva, E., Levashenko, V., Rabcan, J. and Kvassay, M. (2023), “A New Fuzzy-Based Classification Method for Use in Smart/Precision Medicine”, Bioengineering, vol. 10, no. 7, article no. 838, doi: https://doi.org/10.3390/bioengineering10070838

Chen, C., Isa, N.A.M. and Liu, X. (2025), “A review of convolutional neural network based methods for medical image classification”, Computers in Biology and Medicine, vol. 185, doi: https://doi.org/10.1016/j.compbiomed.2024.109507

Lien, W.-C., Yeh, C.-H., Chang, C.-Y., Chang, C.-H., Wang, W.-M., Chen, C.-H. and Lin, Y.-C. (2023), “Convolutional Neural Networks to Classify Alzheimer’s Disease Severity Based on SPECT Images: A Comparative Study”, Journal of Clinical Medicine, vol. 12, article number 2218, doi: https://doi.org/10.3390/jcm12062218

Purwono, P., Wulandari, A. N. E. and Nisa, K. (2025), “Explainable artificial intelligence (XAI) in medical imaging: Techniques, applications, challenges, and future directions”, Advanced Mechanical and Mechatronic Systems, vol. 1, no. 1, pp. 52–66, doi: https://doi.org/10.53623/amms.v1i1.692

Zaitseva, E. and Levashenko, V. (2026), “Reliability engineering in healthcare: Opportunities and challenges”, Reliability Engineering and System Safety, vol. 267, article no. 111933, doi: https://doi.org/10.1016/j.ress.2025.111933

Cheng, Z., Wu, Y., Li, Y., Cai, L. and Ihnaini, B. (2025), “A comprehensive review of explainable artificial intelligence (XAI) in computer vision”, Sensors, vol. 25, article no. 4166, doi: https://doi.org/10.3390/s25134166

Zaitseva, E., Rabcan, J., Levashenko, V. and Kvassay, M. (2023), “Importance analysis of decision-making factors based on fuzzy decision trees”, Applied Soft Computing, vol. 134, article no. 109988, doi: https://doi.org/10.1016/j.asoc.2023.109988

Ortigossa, E. S., Gonçalves, T. and Nonato, L. G. (2024), “Explainable artificial intelligence (XAI) - From theory to methods and applications”, IEEE Access, vol. 12, pp. 80799–80840, doi: https://doi.org/10.1109/ACCESS.2024.3409843

Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Chatila, R. and Herrera, F. (2020), “Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI”, Information Fusion, vol. 58, pp. 82–115, doi: https://doi.org/10.1016/j.inffus.2019.12.012

Fayyaz, A. M., Abdulkadir, S. J., Talpur, N., Al-Selwi, S. M., Hassan, S. U. and Sumiea, E. H. (2025), “Grad-CAM (Gradient-weighted Class Activation Mapping): A systematic literature review”, Computers in Biology and Medicine, vol. 198, part B, article no. 111200, doi: https://doi.org/10.1016/j.compbiomed.2025.111200

Ozer, C., Guler, A., Cansever, A. T. and Oksuz, I. (2026), “Consistent explainable image quality assessment for medical imaging”, Health Information Science and Systems, vol. 14, article no. 31, doi: https://doi.org/10.1007/s13755-025-00411-0

Arun, N., Gaw, N., Singh, P., Chang, K., Aggarwal, M., Chen, B., Hoebel, K., Gupta, S., Patel, J., Gidwani, M., Matthew, J. A. and Kalpathy-Cramer, J. (2021), “Assessing the trustworthiness of saliency maps for localizing abnormalities in medical imaging”, Radiology: Artificial Intelligence, vol. 3, no. 6, article no. 200267, doi: https://doi.org/10.1148/ryai.2021200267

Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S. Q. H., Nguyen, C. D. T., Ngo, V.-D., Seekins, J., Blankenberg, F. G., Ng, A. Y., Lungren, M. P. and Rajpurkar, P. (2022), “Benchmarking saliency methods for chest X‑ray interpretation”, Nature Machine Intelligence, vol. 4, pp. 867–878, doi: https://doi.org/10.1038/s42256-022-00536-x

Uraninjo (2022), “Augmented Alzheimer MRI Dataset for Better Results on Models”, Kaggle, available at: https://www.kaggle.com/datasets/uraninjo/augmented-alzheimer-mri-dataset

Lawrence, R. M., O’Toole, C. M., Duffy, B., Arvapalli, G. C., Ramachandran, S. C., Pisner, D. A., Frank, P. F., Lemmer, A. D., Nikolaidis, A. and Vogelstein, J. T. (2021), “Standardizing human brain parcellations”, Scientific Data, vol. 8, no. 98, doi: https://doi.org/10.1038/s41597-021-00849-3

Nowinski, W. L. (2020), “Evolution of Human Brain Atlases in Terms of Content, Applications, and Visualization”, Neuroinformatics, vol. 19, no. 1, pp. 1–22, doi: https://doi.org/10.1007/s12021-020-09481-9

Sengupta, D., Gupta, P. and Biswas, A. (2022), “A survey on mutual information based medical image registration”, Neurocomputing, vol. 486, pp. 174–188, doi: https://doi.org/10.1016/j.neucom.2021.11.023

Woodworth, D. C., et al. (2022), “Dementia is strongly associated with medial temporal atrophy even after accounting for neuropathologies”, Brain Communications, vol. 4, no. 2, doi: https://doi.org/10.1093/braincomms/fcac052

Rosa-Neto, P. (2021), “Chapter 9. Brain imaging using CT and MRI”, Alzheimer’s Disease International, available at: https://www.alzint.org/u/World-Alzheimer-Report-2021-Chapter-09.pdf

Forno, G., Saranathan, M., Contador, J., Guillen, N., Falgàs, N., Tort-Merino, A., Balasa, M., Sanchez-Valle, R., Hornberger, M. and Lladó, A. (2023), “Thalamic nuclei changes in early and late onset Alzheimer’s disease”, Current Research in Neurobiology, vol. 4, article number 100084, doi: https://doi.org/10.1016/j.crneur.2023.100084

Biesbroek, J. M., Verhagen, M. G., van der Stigchel, S. and Biessels, G. J. (2024), “When the central integrator disintegrates: A review of the role of the thalamus in cognition and dementia”, Alzheimer’s & Dementia, vol. 20, pp. 2209–2222, doi: https://doi.org/10.1002/alz.13563

Gomez, T., Fréour, T. and Mouchère, H. (2022), “Metrics for saliency map evaluation of deep learning explanation methods”, arXiv preprint arXiv:2201.13291, available at: https://arxiv.org/abs/2201.13291

Hedström, A., Weber, L., Bareeva, D., Krakowczyk, D., Motzkus, F., Samek, W., Lapuschkin, S. and Höhne, M. M.-C. (2023), “Quantus: An explainable AI toolkit for responsible evaluation of neural network explanations”, Journal of Machine Learning Research, vol. 24, pp. 1–11, doi: https://doi.org/10.48550/arXiv.2202.06861

Nieradzik, L., Zięba, M. and Wróbel, K. (2024), “Reliable evaluation of attribution maps in CNNs: A Perturbation-Based Approach”, International Journal of Computer Vision, vol. 133, pp. 2392–2409, doi: https://doi.org/10.48550/arXiv.2411.14946