Accountability in Medical Autonomous Expert Systems: ethical and epistemological challenges for explainable AI
Medical centres and general health-care systems are making a rapid and irreversible shift toward incorporating autonomous AI decision-making systems (traditionally known as ‘expert systems’) into their practice. The main aim of this NIAS-Lorentz theme-group project is to address the epistemic and normative dimensions of explainable expert systems.
About the Topic
Medical centres and general health-care systems are making a rapid and irreversible shift toward incorporating autonomous decision-making expert systems into their practice. Their importance for the future of health care is very concrete: they promise to advance the analysis of medical evidence, provide fast treatment recommendations, and render reliable diagnoses. Being able to explain the results of expert systems, as well as their reliability and trustworthiness, is of paramount importance for a morally admissible medical practice. Unfortunately, there is little understanding of how such an explanation is possible, and there is effectively no account of its structure. The best that researchers have produced are classifications and weak predictions, none of which offer the epistemological virtues (e.g., understanding) that underpin morally right action. Furthermore, if these expert systems were to be used in actual medical practice, they would be in flagrant violation of Recital 71 and Article 22 of the GDPR, which establish the “right to explanation”.
This project will change the current state of our understanding of expert systems. We propose a novel approach that combines epistemological analysis with the study of its ethical implications. Concretely, we propose to study the structure of explanation in expert systems (including notions such as opacity, trustworthiness, auditability, and reliability) and the ethical implications that follow from bona fide explanations as well as from incomplete, poor, or otherwise bad explanations. Such ethical analysis includes the study of notions such as accountability, bias, and discrimination in the context of explainable expert systems.
Workshop
Explainable Medical AI: Ethics, Epistemology, and Formal Methods, 12–16 April 2021
Members
Juan M. Durán, Delft University of Technology
Sander Beckers, Ludwig Maximilian University, Munich
Giuseppe Primiero, University of Milan
Karin Jongsma, University Medical Center Utrecht
Martin Sand, Delft University of Technology