Explainable Artificial Intelligence (XAI) and Fair Learning have made significant strides in various application domains, including criminal recidivism prediction, healthcare, toxic comment detection, automatic speech recognition, recommendation systems, and image segmentation. However, the two fields have largely evolved independently. Recent studies have demonstrated that incorporating explanations into decision-making processes enhances the transparency and trustworthiness of AI systems. In light of this, our objective is to conduct a systematic review of FairXAI, the interplay between fairness and explainability frameworks. First, we propose a taxonomy of FairXAI that uses XAI both to mitigate and to evaluate bias; this taxonomy can serve as a foundation for machine learning researchers working in diverse domains. We then undertake an extensive review of existing articles, taking into account factors such as the purpose of the interaction, the target audience, and the domain and context. Moreover, we outline an interaction framework for FairXAI that considers various perceptions of fairness, and we propose a FairXAI wheel encompassing four core properties that must be verified and evaluated; this serves as a practical tool for researchers and practitioners to ensure the fairness and transparency of their AI systems. Finally, we identify challenges and conflicts in the interaction between fairness and explainability, which could pave the way toward more responsible AI systems. As the first review of its kind, we hope that this survey will inspire scholars to address these challenges by scrutinizing current research in their respective domains.