Machine learning-enabled medical imaging analysis has become a vital part of automatic diagnosis systems. However, machine learning models, especially deep learning models, have been shown to exhibit systematic bias against certain subgroups of people. For instance, they may yield better predictive performance for males than for females, which is unfair and potentially harmful, especially in healthcare scenarios. In this literature survey, we give a comprehensive review of the current progress of fairness studies in medical image analysis (MedIA) and healthcare. Specifically, we first discuss the definitions of fairness, the sources of unfairness, and potential solutions. Then, we review current research on fairness for MedIA, categorized into fairness evaluation and unfairness mitigation. Furthermore, we conduct extensive experiments to evaluate the fairness of different medical imaging tasks. Finally, we discuss the challenges and future directions in developing fair MedIA and healthcare applications.
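To make the notion of fairness evaluation mentioned above concrete, the following is a minimal sketch of one common group-fairness measure: the gap in true-positive rate (TPR) between two demographic subgroups, sometimes called the equal-opportunity gap. The function names and the toy data are illustrative assumptions, not taken from the survey.

```python
# Hedged sketch: quantify subgroup (un)fairness by comparing a performance
# metric, here the true-positive rate (TPR), across demographic groups.
# All names and data below are illustrative assumptions.

def true_positive_rate(y_true, y_pred):
    """TPR = TP / (TP + FN) over binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0

def tpr_gap(y_true, y_pred, group):
    """Absolute TPR difference between exactly two groups
    (the equal-opportunity gap)."""
    groups = sorted(set(group))
    assert len(groups) == 2, "this sketch assumes exactly two groups"
    tprs = []
    for g in groups:
        idx = [i for i, gi in enumerate(group) if gi == g]
        tprs.append(true_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx]))
    return abs(tprs[0] - tprs[1])

# Toy example: a classifier that misses more positive cases in group "F".
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0]
group  = ["M", "M", "M", "F", "F", "F", "M", "F"]

gap = tpr_gap(y_true, y_pred, group)  # TPR is 1.0 for "M" vs 1/3 for "F"
```

A nonzero gap indicates that the model detects positives at different rates across groups; fairness studies typically report such gaps alongside overall accuracy.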