Jinzong Dong et al.

Confidence calibration in classification models, a technique for obtaining accurate posterior probability estimates for classification results, is crucial for assessing the likelihood of correct decisions in real-world applications. Class-imbalanced data biases model learning and consequently skews the model's posterior probabilities, making confidence calibration more challenging. Calibration is especially complex, and especially necessary, for minority classes, which are often the most important and carry high uncertainty. Unlike previous surveys that investigate confidence calibration or class imbalance separately, this paper comprehensively investigates confidence calibration methods for deep learning-based classification models under class imbalance. First, the problem of confidence calibration under class-imbalanced data is outlined. Second, a novel exploratory analysis of the impact of class imbalance on confidence calibration is carried out, which explains several experimental findings reported in existing studies. The paper then conducts a comprehensive review of 57 state-of-the-art confidence calibration methods for class-imbalanced data, divides them into six groups according to their methodological differences, and systematically compares seven properties to assess their relative strengths. Subsequently, commonly used and emerging evaluation methods in this field are summarized. Finally, we discuss several promising research directions that may serve as a guideline for future studies.
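As background for the evaluation methods mentioned above, the sketch below illustrates the Expected Calibration Error (ECE), one of the most widely used calibration metrics: predictions are grouped into confidence bins, and the gap between average confidence and accuracy is averaged across bins. This is a minimal, illustrative implementation with equal-width bins; the function name, binning choice, and example data are assumptions, not the paper's own evaluation code.

```python
# Illustrative sketch (not the surveyed paper's implementation): Expected
# Calibration Error (ECE) with equal-width confidence bins.
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """probs: (N, C) predicted class probabilities; labels: (N,) true class ids."""
    confidences = probs.max(axis=1)      # top-1 confidence per sample
    predictions = probs.argmax(axis=1)   # predicted class per sample
    accuracies = (predictions == labels).astype(float)

    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # |accuracy - confidence| in this bin, weighted by the bin's frequency
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Example usage with random (untrained) scores, purely for demonstration:
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 5, size=1000)
print(f"ECE: {expected_calibration_error(probs, labels):.4f}")
```

Under class imbalance, such an overall metric can be dominated by majority-class samples, which is one reason class-wise variants of calibration metrics are also considered in this field.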