
Are Linear Regression Models White Box and Interpretable?
  • Ahmed M Salih,
  • Yuhe Wang
Ahmed M Salih
Department of Population Health Sciences, University of Leicester, University Rd; William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London; Barts Heart Centre, St Bartholomew's Hospital, Barts Health NHS Trust; Department of Computer Science, University of Zakho

Corresponding Author: [email protected]

Yuhe Wang
Department of Population Health Sciences, University of Leicester, University Rd

Abstract

Explainable artificial intelligence (XAI) is a set of tools and algorithms that are applied to, or embedded in, machine learning models in order to understand and interpret them. They are especially recommended for complex or advanced models, including deep neural networks, because such models are not interpretable from a human point of view. On the other hand, simple models, including linear regression, are easy to implement, have lower computational complexity, and produce output that is easy to visualize. The common notion in the literature is that simple models such as linear regression are "white box" because they are more interpretable and easier to understand. This is based on the idea that linear regression models have several favorable properties, including revealing the effect of each feature in the model and whether it contributes positively or negatively to the model output. Moreover, the uncertainty of the model can be measured or estimated using confidence intervals. However, we argue that this perception is not accurate and that linear regression models are neither easy to interpret nor easy to understand when considering common XAI metrics and the challenges they may face. These include linearity, local explanation, multicollinearity, covariates, normalization, uncertainty, feature contribution and fairness. Consequently, we recommend that so-called simple models be treated equally to complex models when it comes to explainability and interpretability.
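To make concrete the properties usually cited in favor of linear regression, the minimal sketch below (an illustrative example of ours, not taken from the paper) fits an ordinary least squares model with statsmodels on simulated data and inspects the two quantities the abstract refers to: the sign and magnitude of each coefficient, and the confidence intervals used as an uncertainty estimate. The two correlated predictors also hint at the multicollinearity issue listed among the challenges.

```python
# Illustrative sketch only: coefficient signs and confidence intervals
# of an ordinary least squares fit, on simulated (assumed) data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.3, size=n)   # correlated with x1 (multicollinearity)
y = 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2]))  # add intercept column
model = sm.OLS(y, X).fit()

print(model.params)      # coefficient estimates: sign shows direction of the effect
print(model.conf_int())  # 95% confidence intervals as an uncertainty estimate
```

Note that with correlated predictors, as simulated here, the individual coefficients and their intervals can become unstable, which is one of the reasons the paper argues these readouts are less straightforwardly interpretable than the "white box" label suggests.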