Over the last five years, graph auto-encoders have become popular unsupervised methods, based on graph neural networks, for learning node embeddings from a graph. Graph auto-encoders are trained by optimizing reconstruction losses computed from the connected node pairs (the edges) and the non-connected node pairs of the graph. Because many graphs are sparse, researchers often positively reweight the edges in these reconstruction losses. In this paper, we analyze the effect of edge reweighting on the node embedding. We show that, on a link prediction problem, results are largely insensitive to edge reweighting, except for very unbalanced reconstruction losses. We also discuss whether training models with perfectly balanced reconstruction losses is optimal or suboptimal, in terms of both average scores and standard deviations.
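As an illustration of the kind of loss discussed above, the following is a minimal sketch of a positively reweighted reconstruction loss, assuming a binary cross-entropy over all node pairs; the function name, the toy adjacency matrix, and the `pos_weight` heuristic (ratio of non-edges to edges) are illustrative choices, not the exact formulation of any particular paper.

```python
import numpy as np

def weighted_reconstruction_loss(A, A_hat, pos_weight=1.0):
    """Binary cross-entropy over all node pairs of the adjacency
    matrix A, with the edge terms (A_ij = 1) multiplied by
    pos_weight to counteract graph sparsity."""
    eps = 1e-12  # numerical safeguard for log(0)
    loss = -(pos_weight * A * np.log(A_hat + eps)
             + (1.0 - A) * np.log(1.0 - A_hat + eps))
    return loss.mean()

# Toy adjacency matrix (sparse: 2 directed edges out of 9 pairs)
# and uniform predicted reconstruction probabilities.
A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
A_hat = np.full((3, 3), 0.5)

# A common heuristic: weight edges by (#non-edges) / (#edges),
# so edges and non-edges contribute equally on average.
n_pairs = A.size
n_edges = A.sum()
pos_weight = (n_pairs - n_edges) / n_edges

unweighted = weighted_reconstruction_loss(A, A_hat, pos_weight=1.0)
reweighted = weighted_reconstruction_loss(A, A_hat, pos_weight=pos_weight)
```

With uniform predictions of 0.5, the unweighted loss reduces to log 2, while reweighting inflates the contribution of the (rare) edge terms, which is the balancing effect whose influence on the learned embedding the paper studies.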