Abstract
The field of Explainable AI (XAI) has focused primarily on algorithms
that can help explain decisions and classifications, and help users
understand whether a particular action of an AI system is justified.
These \emph{XAI algorithms} provide a variety of means for answering
questions human users might have about an AI system.
However, explanation is also supported by
\emph{non-algorithms}: methods, tools, interfaces, and
evaluations that might help develop or provide explanations for users,
either on their own or in concert with algorithmic explanations. In this
article, we introduce and describe a small number of non-algorithms we
have developed. These include several sets of methodological guidelines
for evaluating XAI systems, covering both formative and summative
evaluation (such as the Self-Explanation Scorecard and the Stakeholder
Playbook), and several concepts for generating explanations that can
augment or replace algorithmic XAI (such as the Discovery Platform,
Collaborative XAI, and the Cognitive Tutorial). We review several of
these example systems and discuss how they might be useful in
developing or improving algorithmic explanations, or even in providing
complete and useful non-algorithmic explanations of AI and ML systems.