The Antidote project, or explainable AI
Changed on 08/09/2022
The issue of explainability is particularly important for AI applications in medical diagnosis, where it is essential to understand how the algorithm reached its decision.
For this reason, we decided to respond to the CHIST-ERA call with a project highlighting our expertise in natural language argumentation in collaboration with physicians, building on our results in the automatic analysis of argumentation in clinical trial abstracts from PubMed with our ACTA tool.
The Antidote project is coordinated by the Wimmics project-team, in partnership mainly with universities.
In view of our ongoing collaboration on different topics in AI and natural language processing, the choice to co-coordinate this project came naturally.
This allows us to jointly address the issues and challenges that the coordination of a project of this magnitude can raise.
Providing high-quality explanations for AI predictions based on machine learning is a difficult and complex task.
To be effective, it requires, among other things, that the system be able to formulate its explanation in a clearly interpretable, and even convincing, manner.
Given these considerations, the Antidote project promotes an integrated view of explainable AI (XAI), where low-level features of the deep learning process are combined with higher-level patterns of human argumentation.
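To give a concrete, purely illustrative idea of what such an integration could look like, the sketch below aggregates hypothetical token-level attribution scores from a deep learning classifier into scores for higher-level argument components such as claims and evidence, so that an explanation can be phrased at the level of the argument rather than of individual tokens. All names, labels and values are assumptions made for illustration and are not part of the Antidote design.

```python
# Purely illustrative sketch: map low-level (token) attributions from a
# hypothetical classifier onto higher-level argument components.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ArgumentComponent:
    label: str   # e.g. "claim" or "evidence" (hypothetical labels)
    start: int   # token span covered by the component
    end: int


def component_scores(token_attributions: List[float],
                     components: List[ArgumentComponent]) -> Dict[str, float]:
    """Average low-level token attributions over each argument component."""
    scores: Dict[str, float] = {}
    for c in components:
        span = token_attributions[c.start:c.end]
        scores[c.label] = sum(span) / len(span) if span else 0.0
    return scores


if __name__ == "__main__":
    # Hypothetical attributions for a 10-token sentence from an abstract.
    attributions = [0.1, 0.05, 0.6, 0.7, 0.2, 0.1, 0.4, 0.5, 0.05, 0.0]
    components = [
        ArgumentComponent("claim", 2, 5),
        ArgumentComponent("evidence", 6, 9),
    ]
    print(component_scores(attributions, components))
    # {'claim': 0.5, 'evidence': 0.316...}: the claim span drives the prediction,
    # which is the kind of statement an argumentation-level explanation can use.
```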
The Antidote project is based on three considerations:
Antidote will develop an explainable AI centred on argumentation, with revolutionary "integration skills" that allow it to work synergistically with humans, explaining its results in a way they can trust while retaining the ability of AI systems to learn from data.
Antidote will engage users in explanatory dialogues, allowing them to argue with the AI in natural language. The project's main application area is medical education, where it aims to train students to provide clear explanations justifying their diagnoses.