This paper provides a definition of back-propagation through geometric correspondences for morphological neural networks. In addition, dilation layers are shown to learn probe geometry through erosion of layer inputs and outputs. A proof-of-principle is provided in which morphological networks significantly outperform convolutional networks in both prediction quality and speed of convergence.
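
To make the mechanism concrete, the sketch below shows a minimal 2-D grayscale dilation layer with a learnable probe. When back-propagating through the max, gradients flow only to the maximizing offsets, which play the role of the geometric correspondences between layer input and output. This is an illustrative reconstruction under stated assumptions, not the paper's implementation; the names `Dilation2d` and `probe` are hypothetical.

```python
# A minimal sketch (assumptions: PyTorch, a single-channel input, and an
# odd kernel size so padding preserves spatial dimensions). Not the
# authors' code; Dilation2d and probe are illustrative names.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Dilation2d(nn.Module):
    def __init__(self, kernel_size: int):
        super().__init__()
        self.k = kernel_size
        # Learnable grayscale probe (structuring element).
        self.probe = nn.Parameter(torch.zeros(kernel_size, kernel_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 1, H, W). Extract a k*k neighborhood around every pixel.
        pad = self.k // 2
        patches = F.unfold(x, self.k, padding=pad)        # (N, k*k, H*W)
        patches = patches + self.probe.reshape(1, -1, 1)  # add probe values
        # Grayscale dilation: max over each neighborhood. Autograd routes
        # the gradient only to the maximizing offset, i.e. the geometric
        # correspondence between input and output.
        out, _ = patches.max(dim=1)                       # (N, H*W)
        return out.reshape(x.shape)

# Usage: one dilation layer trained end-to-end.
layer = Dilation2d(kernel_size=5)
x = torch.rand(8, 1, 32, 32, requires_grad=True)
y = layer(x)
y.mean().backward()  # probe.grad accumulates only at the correspondences
```

Under these assumptions the layer is trainable with standard optimizers, since the subgradient of the max passes through exactly one probe entry per output pixel.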