Fundus photography (FP) is a crucial technique for assessing the progression of ocular and systemic diseases in clinical studies, with wide applications in early screening and diagnosis. However, non-uniform illumination and imbalanced intensity arising from various factors often severely degrade the quality of fundus images, bringing challenges for automated screening, analysis, and diagnosis of diseases. To resolve this problem, we developed a strongly constrained generative adversarial network (SCGAN). The results demonstrate that SCGAN significantly enhanced image quality across various datasets while more effectively retaining tissue and vascular information under various experimental conditions. Furthermore, the clinical effectiveness and robustness of the model were validated by its improved performance in vascular segmentation as well as disease diagnosis. Our study provides a new comprehensive approach for FP and has the potential to advance artificial intelligence-assisted ophthalmic examination.