Artificial intelligence techniques, including machine learning models, have shown success in a variety of domains. This is especially evident with complex models such as deep learning. However, such success is accompanied by opacity around how these models work internally and how they reach their decisions. Explainable Artificial Intelligence (XAI) has emerged as a field of research dedicated to uncovering how complex models work. The ultimate aim of XAI is to make complex models more transparent, trustworthy, and understandable, even to lay-persons with no technical background. XAI can take different forms, including heatmaps applied to images, concepts that are significant to the model when making a decision, informative features in tabular data, the effect of a single feature on the outcome, fuzzy logic rules, textual explanations through image captioning, uncertainty quantification, and more. Several factors affect the form an explanation takes, including the data used and the model itself. This paper reviews and groups the current XAI methods in the literature based on the form of their output. In addition, the paper discusses each group of XAI methods, how they work, and their strengths and weaknesses. Our review shows that although XAI methods have been used extensively in research, their deployment in real-life problems, especially in sensitive domains, may be premature at the current stage.