Qualitative investigation in explainable artificial intelligence: A bit
more insight from social science
Abstract
We present a focused analysis of user studies in explainable artificial
intelligence (XAI) that involve qualitative investigation. Drawing on
the social science literature, we suggest ways to improve the rigor of
studies in which XAI researchers use observations, interviews, focus
groups, and/or questionnaires to capture qualitative data. We
contextualize the presentation of the XAI papers included in our
analysis according to the components of rigor described in the
qualitative research literature: 1) underlying theories or frameworks,
2) methodological approaches, 3) data collection methods, and 4) data
analysis processes. The results of our analysis support calls from
others in the XAI community to collaborate with social science experts
in order to bolster rigor and effectiveness in user studies.