Abstract
Cutting-edge methods in artificial intelligence (AI) can significantly
improve outcomes. However, the difficulty of interpreting these
black-box models poses a serious problem for industry. When selecting a
model, practitioners must often decide whether to sacrifice accuracy for
interpretability. In this paper, we consider a case study
on eye state detection using electroencephalogram (EEG) signals to
investigate how a deep neural network (DNN) model makes predictions
and how those predictions can be interpreted.