Workshop series “Issues in XAI #4: Explanatory AI: Between Ethics and Epistemology”
May 23 - May 25
See TUDelft.nl for more information on this event.
The workshop focuses on the normative and epistemic aspects of explainable AI (XAI), two dimensions that are especially relevant to each other in this case. The goal of XAI is first and foremost epistemic: to provide knowledge or understanding of the inner workings of AI models. Nevertheless, relevant normative questions about transparency, responsibility, and accountability also interact with XAI. The workshop therefore aims at a synergy between epistemological concerns and non-epistemological ones (e.g., ethical, political, economic, societal).

On the one hand, the epistemic status of XAI tools can inform their role as a solution to non-epistemological/normative questions. If current XAI tools fail to provide understanding of the inner workings of AI models, e.g., yielding only limited knowledge of the importance of input features, what role can they play in facilitating meaningful human control? To what extent can they support human agency and clarify questions of accountability? Being clearer about the epistemic status of users can yield more fine-grained answers to these philosophical questions.

On the other hand, normative questions can further inform what the appropriate epistemic goals are for (not yet developed) XAI tools. If the normative questions turn out to require a specific epistemic status with respect to the model that is used, this can support epistemological discussions on how to reach that status. What is the explanatory logic for XAI that meets the epistemic and non-epistemic standards required of it? How do normative dimensions of epistemic notions affect the epistemological debate on XAI?

This range of topics at the intersection of epistemology and normativity in XAI is important and yet largely underdeveloped. With this workshop we hope to bring the two parts of philosophy closer together. While the workshop is not focused on one specific topic, there is special interest in medical AI.
We invite submissions from all related academic fields, including philosophy of (computer) science, epistemology, political and moral philosophy, political theory, legal theory, and social theory. Possible questions/topics include:
- The logic of scientific explanation for/in AI
- The epistemic and moral goods expected from explaining AI (e.g., understanding, knowledge, moral justification)
- Trustworthy AI: benefits and limits of Transparency, Accountability, Explainability, and Computational Reliabilism
- Which epistemic and non-epistemic values (social, economic, political, moral, etc.) are relevant for XAI, and to what extent do explanations in AI affect non-epistemic values?
- Are responsibility, accountability and contestability possible without XAI?
- What forms of backward- and forward-looking responsibility are tailored to XAI and notions of trustworthiness?
- How can forms of epistemic injustice (hermeneutical, testimonial, and otherwise) be ameliorated?
This list is non-exhaustive, and submissions on related topics are welcome.
If you are interested in participating in this expert workshop, please submit an anonymized abstract of no more than 500 words via EasyChair (https://easychair.org/my/conference?conf=eexai2021), along with an email including your name, title, and affiliation. Participants will be asked to give a presentation of their paper (25 min + 20 min Q&A). Authors of all accepted abstracts will be invited to submit a full paper to a Special Issue, possibly in Ethics and Information Technology.
The workshop will take place in person at Delft University of Technology. Online participation will also be available for those who cannot travel.