Xinwu Ye et al.

Recent advances in Graph Neural Networks (GNNs) have demonstrated their potential for various applications, such as social networks and financial networks. However, GNNs also exhibit fairness issues, particularly in human-related decision contexts, where they may lead to unfair treatment of groups that have historically faced discrimination. While several visual analytics studies have explored fairness in machine learning (ML), few have tackled the specific challenges posed by GNNs. We propose a visual analytics framework for analyzing GNN fairness that offers insight into how attribute and structural biases give rise to model bias. The framework is model-agnostic and tailored to real-world scenarios involving multiple, multinary sensitive attributes, and it employs an extended suite of fairness metrics. To operationalize the framework, we develop GNNFairViz, a visual analysis tool that integrates seamlessly into the GNN development workflow and offers interactive visualizations. The tool allows users to analyze model biases comprehensively, supporting node selection, fairness inspection, and diagnostics. We evaluate our approach through two usage scenarios and expert interviews, confirming its effectiveness and usability for GNN fairness analysis. Furthermore, we distill two general insights into GNN fairness from our observations of GNNFairViz in use: the prevalence of the "Overwhelming Effect" in highly imbalanced datasets and the importance of selecting a suitable GNN architecture for bias mitigation.