Automatic large-scale land cover mapping remains one of the most important problems in remote sensing. Vast amounts of radar and multi-spectral imagery are available from satellites such as Sentinel-1 and Sentinel-2 of the European Space Agency's Copernicus programme. Despite this abundance of satellite data, high-resolution, accurately labeled training data remains scarce, mainly because of the high cost of labeling such large volumes of high-resolution imagery. Recent works have leveraged existing global land cover products to train land cover mapping models; however, these products often suffer from significant label noise and low resolution, limiting the accuracy achievable by the trained models. To address this problem, a host of weak-supervision methods have been proposed, including the design of noise-robust loss functions. In this work, we use the SEN12MS and DFC2020 datasets to evaluate previously proposed weak-supervision loss functions for training deep learning models for land cover mapping, and we additionally explore ensembles of these losses. We report the performance of popular weak-supervision loss functions and of the proposed ensembles. A variation of the backward loss-correction approach performs best on both datasets, followed by the cross-entropy loss and an ensemble loss composed of the cross-entropy, focal, and unhinged losses.
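To make the ensemble idea concrete, the sketch below shows one way such a loss could be assembled for pixel-wise land cover classification: a weighted sum of the cross-entropy, focal, and unhinged losses. This is a minimal illustration under our own assumptions (PyTorch, a common multi-class form of the unhinged loss as 1 - p_y, and placeholder ensemble weights and focal gamma), not the exact implementation evaluated in the paper.

```python
# Illustrative sketch of an ensemble loss (cross-entropy + focal + unhinged)
# for multi-class semantic segmentation. Weights, gamma, and shapes are
# hypothetical placeholders, not the paper's settings.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Multi-class focal loss: down-weights well-classified pixels."""
    log_p = F.log_softmax(logits, dim=1)                        # (N, C, H, W)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)   # (N, H, W)
    pt = log_pt.exp()
    return (-(1.0 - pt) ** gamma * log_pt).mean()

def unhinged_loss(logits, targets):
    """One common multi-class adaptation of the unhinged loss: 1 - p_y."""
    p = F.softmax(logits, dim=1)
    p_y = p.gather(1, targets.unsqueeze(1)).squeeze(1)
    return (1.0 - p_y).mean()

def ensemble_loss(logits, targets, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of cross-entropy, focal, and unhinged losses."""
    w_ce, w_focal, w_unh = weights
    return (w_ce * F.cross_entropy(logits, targets)
            + w_focal * focal_loss(logits, targets)
            + w_unh * unhinged_loss(logits, targets))

# Example usage with random logits from a hypothetical segmentation model
# over C land cover classes.
if __name__ == "__main__":
    N, C, H, W = 2, 10, 64, 64
    logits = torch.randn(N, C, H, W, requires_grad=True)
    targets = torch.randint(0, C, (N, H, W))
    loss = ensemble_loss(logits, targets)
    loss.backward()
    print(loss.item())
```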