Reframing Explanation as an Interactive Medium: The EQUAS (Explainable
QUestion Answering System) Project
Abstract
This letter provides a retrospective analysis of our team’s research
performed under the DARPA Explainable Artificial Intelligence (XAI)
project. We began by exploring salience maps, English sentences, and
lists of feature names for explaining the behavior of
deep-learning-based discriminative systems, especially visual question
answering systems. We demonstrated limited positive effects from
statically presenting explanations alongside system answers, for
example, when teaching people to identify bird species. Many XAI
performers were achieving better results when users interacted with
explanations. This motivated us to evolve the notion of explanation as
an interactive medium, usually between humans and AI systems but
sometimes within the software system itself. We realized that interacting via
explanations could enable people to task and adapt ML agents. We added
affordances for editing explanations and modified the ML system to act
in accordance with those edits, producing an interpretable interface to
the agent. Through this interface, editing an explanation can adapt a
system’s performance to new, modified purposes. This deep tasking,
wherein the agent knows its objective and the explanation for that
objective, will be critical to enabling higher levels of autonomy.