Mammography is a widely used diagnostic imaging procedure for detecting breast cancer at an early stage. Existing deep learning (DL) approaches to breast cancer detection are computationally expensive and prone to misclassification, and are therefore not yet reliable enough to replace the techniques used by medical practitioners. In particular, these DL approaches do not exploit the complex texture patterns and interactions present in mammograms. They also require labelled data for learning, which limits their scalability given the scarcity of sufficiently large labelled datasets, and they generalise poorly to newly synthesised patterns and textures. To address these problems, we first design a graph model that transforms mammogram images into highly correlated multi-graphs encoding rich structural relations and high-level texture features. We then employ a self-supervised learning multi-graph encoder (SSL-MG) to improve the learned feature representations, especially when limited labelled data is available. Finally, we design a semi-supervised mammogram multi-graph convolutional neural network (MMGCN) as a downstream model that performs multi-class classification of the mammogram segments encoded in the multi-graph nodes. We evaluate the classification performance of MMGCN both independently and integrated with SSL-MG in a model called SSL-MMGCN, under several training settings. Our results show the efficient learning performance of SSL-MMGCN and MMGCN, which achieve AUCs of 0.97 and 0.98, respectively, compared with 0.81 for the multi-task deep graph convolutional network (GCN) method of Hao Du et al. (2021).
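
To make the downstream classification step more concrete, the following is a minimal, hypothetical sketch of GCN-based node classification in PyTorch Geometric, analogous to classifying mammogram segments encoded as graph nodes. The two-layer depth, layer sizes, feature dimensions, and variable names are illustrative assumptions only and do not reproduce the paper's actual MMGCN, SSL-MG, or SSL-MMGCN implementations.

```python
# Illustrative sketch only: a two-layer GCN that classifies graph nodes.
# All dimensions and the toy graph below are assumptions, not the paper's setup.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.data import Data

class NodeClassifierGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)  # per-node class logits

# Toy graph: 6 "segment" nodes with 16-dim texture features and 3 classes.
x = torch.randn(6, 16)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
                           [1, 2, 3, 4, 5, 0]], dtype=torch.long)
y = torch.tensor([0, 1, 2, 0, 1, 2])
data = Data(x=x, edge_index=edge_index, y=y)

model = NodeClassifierGCN(in_dim=16, hidden_dim=32, num_classes=3)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(50):  # short training loop on the toy graph
    optimizer.zero_grad()
    logits = model(data.x, data.edge_index)
    loss = F.cross_entropy(logits, data.y)
    loss.backward()
    optimizer.step()
```

In the semi-supervised setting described in the abstract, the cross-entropy loss would be computed only over the labelled subset of nodes, while message passing still propagates information from unlabelled nodes.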