Academic Journal

A Scoping Review on the Progress, Applicability, and Future of Explainable Artificial Intelligence in Medicine.

Bibliographic Details
Title: A Scoping Review on the Progress, Applicability, and Future of Explainable Artificial Intelligence in Medicine.
Authors: González-Alday, Raquel, García-Cuesta, Esteban, Kulikowski, Casimir A., Maojo, Victor
Source: Applied Sciences (2076-3417); Oct 2023, Vol. 13, Issue 19, p10778, 23p
Abstract: Due to the success of artificial intelligence (AI) applications in the medical field over the past decade, concerns about the explainability of these systems have increased. The reliability requirements placed on black-box algorithms that make decisions affecting patients pose a challenge that goes beyond accuracy alone. Recent advances in AI increasingly emphasize the necessity of integrating explainability into these systems. While most traditional AI methods and expert systems are inherently interpretable, the recent literature has focused primarily on explainability techniques for more complex models such as deep learning. This scoping review critically analyzes the existing literature on the explainability and interpretability of AI methods in the clinical domain. It offers a comprehensive overview of past and current research trends with the objective of identifying the limitations that hinder the advancement of Explainable Artificial Intelligence (XAI) in medicine. These limitations include the diverse requirements of key stakeholders, such as clinicians, patients, and developers; cognitive barriers to knowledge acquisition; the absence of standardized evaluation criteria; the risk of mistaking explanations for causal relationships; and the apparent trade-off between model accuracy and interpretability. Furthermore, the review discusses possible research directions for surmounting these challenges, including approaches that leverage medical expertise to enhance interpretability in clinical settings, such as data fusion techniques and interdisciplinary assessments throughout the development process, while emphasizing that the needs of end users must be taken into account to design trustworthy explainability methods. [ABSTRACT FROM AUTHOR]
Subject Terms: DEEP learning, ARTIFICIAL intelligence, EXPERT systems, PATIENT decision making, MULTISENSOR data fusion
ISSN: 2076-3417
DOI: 10.3390/app131910778
Database: Complementary Index