Title
What should we explain with explainable AI?
Abstract
Explaining opaque models is important to serve various practical or moral ends. XAI – both the practice and the philosophy thereof – seems to be out of step with best practice in science communication and evidence-based policy. While these fields aim to explain some aspect of the world using scientific models, debates on XAI assume that the model itself is, first and foremost, the correct target of the explanation. We examine that assumption, using the example of explanations of lending decisions. Consumer credit is a fruitful case study for the moral importance of explanation because it has some of the best-regulated explanatory requirements of any domain, and the ways in which those explanations fall short are instructive. We argue that, in the domain of consumer lending, XAI is correct to focus on explanations of how the model works, due to the instrumental value of credit. The argument for this conclusion sets up an overlooked issue about the right to explanation: its distributional consequences. The right to explanation seems to be objectionably inegalitarian, at least in the case of consumer credit.
About Kate
Kate is Associate Professor in the Department of Philosophy, Logic and Scientific Method at the London School of Economics. She works on questions across the philosophy of social science, political philosophy, and the philosophy of AI.
From 2024 to 2028, she will be investigating AI, worker autonomy, and the future of work with funding from a UKRI Future Leaders Fellowship grant.