gms | German Medical Science

68. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e. V. (GMDS)

17.09. - 21.09.23, Heilbronn

The Normativity of Explainability in Healthcare: A Mapping Review

Meeting Abstract

  • Nils Freyer - FH Aachen University of Applied Sciences, Aachen, Germany
  • Myriam Lipprandt - Medical Faculty, RWTH Aachen University, Aachen, Germany
  • Matthias Meinecke - FH Aachen University of Applied Sciences, Aachen, Germany

Deutsche Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie. 68. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e. V. (GMDS). Heilbronn, 17.-21.09.2023. Düsseldorf: German Medical Science GMS Publishing House; 2023. DocAbstr. 296

doi: 10.3205/23gmds147, urn:nbn:de:0183-23gmds1477

Published: September 15, 2023

© 2023 Freyer et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License. See license information at http://creativecommons.org/licenses/by/4.0/.


Text

Introduction: Despite continuous improvements in performance, especially in clinical contexts, a major pitfall of AI-based decision support systems (AI-DSS) remains their epistemic opacity [1], [2], [3]. The handling of supposedly unexplainable technology in healthcare is therefore an active field of research for both technical and ethical scholars. This work presents a mapping review of the literature on the normativity of the explainability of AI in healthcare, with the goal of providing an overview of the current debate for ethicists, developers, and practitioners of AI-DSS [4], [5]. In addition, we provide a case study that incorporates the findings into the AI-DSS development process.

Methods: We conducted a literature search in PubMed, BASE, and Scopus for English-language documents published between 2016 and 2023. The inclusion criterion was that a document discusses the normativity of at least one notion of explainability, such as “explainability”, “explicability”, “interpretability”, “contestability”, “transparency”, “black box”, or “understandability”. Surveys, systematic reviews, and technical or empirical investigations of explainability were excluded from the review. The epistemic assumptions, normative motivations, and normative and technical requirements outlined in the documents were qualitatively categorized and demonstrated in a case study.

Results: The literature search yielded 1320 documents. After removing duplicates, 882 documents remained, of which 80 were found relevant in the abstract screening.

First, we found that the normative requirements on the explainability of AI in healthcare can be categorized according to (a) their underlying epistemic assumptions about the decision-making processes of AI-DSS and clinical experts, and (b) the normative stance, and thus the perspective, taken by the respective scholars.

Second, we identified three major positions on the normativity of explainability in healthcare:

1. A lack of explainability is not morally justifiable; AI-DSS must be explainable.
2. A lack of explainability is morally justifiable; the effectiveness of the AI must be sufficiently validated.
3. AI-DSS must be at least as explainable as their human counterparts in decision-making, and their effectiveness must be sufficiently validated.

Discussion: We identified epistemological assumptions about AI and human explanations as an important source of disagreement in normative judgments on the explainability of AI in healthcare. Given this, empirical research on the matter may better inform the debate and help resolve the foundational disagreements. However, normative stances will remain a source of disagreement and must be considered when deploying AI-DSS in healthcare.

Moreover, a mapping of existing solutions to the identified normative requirements of explainability would be desirable and might reveal specific research gaps for AI-DSS developers in healthcare.

Conclusion: The presented mapping review informs developers, practitioners, and scholars of ethics alike. The identified epistemic assumptions may form a research agenda for grounding the debate in well-founded empirical research. For ethicists, this review provides an overview of the current state of the debate. Developers and practitioners in the field may use it to get an overview of recommended actions under different normative stances and applications. Finally, this review may help translate normative requirements into policies and technological requirements.

The authors declare that they have no competing interests.

The authors declare that an ethics committee vote is not required.


References

1. Tsamados A, Aggarwal N, Cowls J, Morley J, Roberts H, Taddeo M, et al. The ethics of algorithms: key problems and solutions. AI & Soc. 2022 Mar;37(1):215–30.
2. London AJ. Artificial intelligence and black-box medical decisions: accuracy versus explainability. Hastings Cent Rep. 2019;49(1):15–21.
3. Bjerring JC, Busch J. Artificial Intelligence and Patient-Centered Decision-Making. Philos Technol. 2021;34(2):349–71.
4. Loh HW, Ooi CP, Seoni S, Barua PD, Molinari F, Acharya UR. Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011–2022). Comput Methods Programs Biomed. 2022 Nov;226:107161.
5. Chaddad A, Peng J, Xu J, Bouridane A. Survey of Explainable AI Techniques in Healthcare. Sensors. 2023 Jan;23(2):634.