Pixel-based loss functions are known to correlate poorly with human perception of image quality. In the field of image deblurring, a perceptual loss term is therefore commonly used to obtain visually better reconstruction results. However, it has been shown that this can introduce visible artifacts, and the current literature has not yet sufficiently investigated why these artifacts are generated. In this work, we address this gap and provide new insights into the generation of such artifacts. We show not only that artifact generation is caused by the feature map used for the perceptual loss, but also that the level of distortion in the input affects the extent of artifact generation. Furthermore, we show that common metrics are insensitive to these artifacts, and we propose a method to reduce the extent of the generated artifacts.