Synthetic data generation offers a promising solution to enhance the usefulness of Electronic Healthcare Records (EHR) by generating realistic de-identified data. However, the existing literature primarily focuses on the quality of synthetic health data, neglecting the crucial aspect of fairness in downstream predictions. Consequently, models trained on synthetic EHR have faced criticism for producing biased outcomes in target tasks. These biases can arise from either (i) spurious correlations between features or (ii) the failure of models to accurately represent subgroups. To address these concerns, we present Bias-transforming Generative Adversarial Networks (Bt-GAN), a GAN-based synthetic data generator specifically designed for the healthcare domain. To tackle spurious correlations (i), we propose an information-constrained Data Generation Process (DGP) that enables the generator to learn a fair deterministic transformation based on a well-defined notion of algorithmic fairness. To overcome the challenge of capturing exact subgroup representations (ii), we incentivize the generator to preserve subgroup densities through score-based weighted sampling, which compels the generator to learn from under-represented regions of the data manifold. To evaluate the effectiveness of the proposed method, we conduct extensive experiments on the Medical Information Mart for Intensive Care (MIMIC-III) database. Our results demonstrate that Bt-GAN achieves state-of-the-art accuracy while significantly improving fairness and minimizing bias amplification. Furthermore, we perform an in-depth explainability analysis to provide additional evidence supporting the validity of our study. In conclusion, our research introduces a novel approach to addressing the limitations of synthetic data generation in the healthcare domain: by incorporating fairness considerations into GAN-based generation, we pave the way for more reliable and unbiased predictions in healthcare applications.
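To make the weighted-sampling idea concrete, the following minimal Python sketch upweights records from under-represented subgroups when drawing mini-batches, so a generator trained on such batches sees sparse regions of the data manifold more often. This is an illustrative simplification, not the Bt-GAN implementation: the frequency-based weights (in place of learned scores), the function names, and the toy data are assumptions made for this example only.

```python
# Illustrative sketch (not the authors' implementation): weighted mini-batch
# sampling that upweights under-represented subgroups during generator training.
import numpy as np

def subgroup_sampling_weights(subgroup_ids: np.ndarray) -> np.ndarray:
    """Per-record sampling weights inversely proportional to the empirical
    frequency of each record's subgroup (assumption: frequency-based scores)."""
    _, inverse, counts = np.unique(subgroup_ids, return_inverse=True, return_counts=True)
    weights = 1.0 / counts[inverse]      # rare subgroups receive larger weights
    return weights / weights.sum()       # normalise to a probability vector

def sample_minibatch(data: np.ndarray, subgroup_ids: np.ndarray,
                     batch_size: int, rng: np.random.Generator) -> np.ndarray:
    """Draw a mini-batch with probability proportional to the subgroup weights,
    so under-represented regions of the data are sampled more often."""
    p = subgroup_sampling_weights(subgroup_ids)
    idx = rng.choice(len(data), size=batch_size, replace=True, p=p)
    return data[idx]

# Toy usage: a 90/10 imbalance between two subgroups.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
groups = np.array([0] * 900 + [1] * 100)
batch = sample_minibatch(X, groups, batch_size=64, rng=rng)
```

With this weighting, each subgroup contributes equal total probability mass to a batch, which is one simple way to keep minority subgroups from being under-sampled during training.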
Explainable Artificial Intelligence (XAI) and Fair Learning have made significant strides in various application domains, including criminal recidivism prediction, healthcare settings, toxic comment detection, automatic speech detection, recommendation systems, and image segmentation. However, these two fields have largely evolved independently. Recent studies have demonstrated that incorporating explanations into decision-making processes enhances the transparency and trustworthiness of AI systems. In light of this, our objective is to conduct a systematic review of FairXAI, which explores the interplay between fairness and explainability frameworks. To begin, we propose a taxonomy of FairXAI that uses XAI as a means to both mitigate and evaluate bias. This taxonomy will serve as a foundation for machine learning researchers operating in diverse domains. Additionally, we undertake an extensive review of existing articles, taking into account factors such as the purpose of the interaction, the target audience, and the domain and context. Moreover, we outline an interaction framework for FairXAI that considers various fairness perceptions and propose a FairXAI wheel encompassing four core properties that must be verified and evaluated. This will serve as a practical tool for researchers and practitioners, helping to ensure the fairness and transparency of their AI systems. Furthermore, we identify challenges and conflicts in the interactions between fairness and explainability, which could pave the way for enhancing the responsibility of AI systems. As the inaugural review of its kind, we hope that this survey will inspire scholars to address these challenges by scrutinizing current research in their respective domains.
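As a small, hypothetical illustration of the "evaluate bias" side of the taxonomy, the sketch below computes the demographic parity difference for binary predictions across a binary sensitive attribute; such a gap is the kind of disparity that XAI tools can then help attribute to specific features. The function name and toy data are assumptions for illustration and are not drawn from any framework reviewed in the survey.

```python
# Hypothetical example: a minimal group-fairness check that could precede an
# explanation-based analysis of where a prediction gap comes from.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between the two groups."""
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

# Toy usage with synthetic binary predictions and a binary sensitive attribute.
rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, size=500)
y_pred = (rng.random(500) < np.where(sensitive == 0, 0.6, 0.4)).astype(int)
print(demographic_parity_difference(y_pred, sensitive))  # roughly a 0.2 gap by construction
```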