The widespread adoption of Artificial Intelligence (AI) models across industries in recent years has made Explainable Artificial Intelligence (XAI) an active field of research. Trust in these models, and their effectiveness, can suffer when their results cannot be explained or are otherwise unfavorable. XAI has advanced to the point where many metrics have been proposed to explain the outputs of AI models; however, there is little consensus on which technical metrics are most important, or on how best to evaluate explainable methods and models. This paper surveys several such attempts, and it also examines the ethics of AI and its societal impact. Given AI's modern ubiquity and the deeply multidisciplinary field it has become, technical metrics alone cannot fully capture XAI's effectiveness. This paper therefore explores several approaches to measuring the ethical effects of XAI, whether those effects bear on modern research, and how the impacts of AI and XAI on society are measured. The effort to quantify the effectiveness of XAI models is thus explored from both technical and non-technical points of view.