Explainable Neural Networks: Transparency and Trust in Medical Diagnosis with Radiological Images: A Systematic Review. (#1355)

Read Article

Date of Conference

July 16-18, 2025

Published In

"Engineering, Artificial Intelligence, and Sustainable Technologies in service of society"

Location of Conference

Mexico

Authors

Paiva Sánchez, Miguel Hans

Mendoza Crisanto, Eduardo Aldair

Abstract

Advances in Explainable Artificial Intelligence (XAI) have transformed the field of medical diagnosis, addressing challenges related to the interpretability of and trust in deep learning models. This study conducts a systematic literature review (SLR) to explore how XAI has been applied to the analysis of radiological images, such as X-rays, computed tomography, and magnetic resonance imaging, with the goal of providing transparency in diagnostic outcomes. The analysis identified a predominance of post-hoc techniques, such as SHAP and LIME, alongside inherently interpretable approaches, including decision trees and neural networks with attention mechanisms. These tools have enhanced trust in AI systems by offering clear interpretations of algorithmic reasoning. However, significant gaps remain in the standardization of evaluation metrics and in the suitability of explanations for clinical professionals. XAI represents a critical step toward the widespread acceptance of artificial intelligence in medical diagnosis by addressing the opacity of deep learning models. This review consolidates existing initiatives and offers key recommendations for future research, with an emphasis on developing standardized metrics and explainability tools focused on the needs of medical professionals and their patients.