Abstract
This letter presents a novel self-supervised learning strategy to
improve the robustness of a monocular depth estimation (MDE) network
against motion blur. Motion blur, a common problem in real-world
applications such as autonomous driving and scene reconstruction, often
hinders accurate depth perception. Conventional MDE methods are
effective under controlled conditions but struggle to generalise to
blurred images. To address this problem, we generate
blur-synthesised data to train a blur-robust MDE model without the need
for preprocessing such as deblurring. By combining this
blur-synthesised data with self-distillation, we significantly improve
depth estimation accuracy on blurred images without any additional
computational or memory overhead. Extensive experimental results
demonstrate the effectiveness of the proposed method, enabling existing
MDE models to estimate depth accurately across various blur conditions.