Liu Heng et al.

Given a content image and an artistic style image, style transfer refers to applying the patterns learned from the style image to the content image to generate a new stylized image. Despite the impressive results achieved by existing style transfer methods, most of them suffer from two limitations: 1) they do not preserve the structure of the content image well; 2) they cannot generate sufficiently delicate style effects, or they produce significant artifacts. Maintaining a balance between content structure preservation and style pattern transformation remains a challenge. In this work, we observe that multi-level content-style cross attention can extract the content features that match the style characteristics at different feature levels. We further find that, through multi-level dynamic normalization and alignment, hierarchical content-style cross attention can effectively transform the content image with style characteristics from different levels while preserving its local structure and semantics as much as possible. A perceptual loss and a contextual loss are introduced to ensure that the generated stylized image stays close, in feature space, to the content image and the style image, respectively. In addition, an identity loss on the content and style images encourages the proposed model to retain the global appearance and feature semantics of the input images without overall statistical deviation. Extensive qualitative and quantitative experiments on the benchmark MSCOCO and WikiArt datasets demonstrate that, compared with other state-of-the-art (SOTA) methods, the proposed approach obtains high-quality stylized images with a good balance between structure and style. The project code is available at https://github.com/hengliusky/MNCAA.
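The abstract does not spell out the attention mechanism, so the following PyTorch sketch only illustrates one plausible form of content-style cross attention at a single feature level, assuming that queries come from instance-normalized content features and keys/values from the style features; the class and parameter names (`ContentStyleCrossAttention`, `to_q`, and so on) are hypothetical and are not taken from the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentStyleCrossAttention(nn.Module):
    """Hypothetical single-level content-style cross attention:
    queries come from the content features, keys/values from the
    style features, so each content location is re-rendered from
    its best-matching style locations."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces.
        self.to_q = nn.Conv2d(channels, channels, 1)
        self.to_k = nn.Conv2d(channels, channels, 1)
        self.to_v = nn.Conv2d(channels, channels, 1)
        self.out = nn.Conv2d(channels, channels, 1)

    def forward(self, content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        b, c, h, w = content.shape
        # Instance normalization before matching makes the attention depend
        # on spatial structure rather than on global feature statistics.
        q = self.to_q(F.instance_norm(content)).flatten(2)   # (b, c, hw_c)
        k = self.to_k(F.instance_norm(style)).flatten(2)     # (b, c, hw_s)
        v = self.to_v(style).flatten(2)                      # (b, c, hw_s)
        # Each content location attends over all style locations.
        attn = torch.softmax(
            torch.bmm(q.transpose(1, 2), k) / c ** 0.5, dim=-1)  # (b, hw_c, hw_s)
        fused = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        # A residual connection helps keep the local content structure dominant.
        return content + self.out(fused)
```

In a multi-level arrangement such as the one the abstract describes, one such module would presumably be applied at several encoder depths (e.g., different layers of a VGG-style encoder), with the per-level outputs fused by a decoder.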
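Likewise, purely as a rough, non-authoritative sketch of the training objective: the functions below assume lists of encoder feature maps, use a simplified variant of the contextual loss of Mechrez et al. (2018), and treat all weights and helper names (`perceptual_loss`, `band_width`, `encoder`, `model`) as assumptions for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def perceptual_loss(feats_gen, feats_content):
    """Distance between generated and content features, summed over levels."""
    return sum(F.mse_loss(g, c) for g, c in zip(feats_gen, feats_content))

def contextual_loss(feats_gen, feats_style, band_width=0.5, eps=1e-5):
    """Simplified contextual loss: every generated feature vector should
    find a close match somewhere in the style features, irrespective of
    spatial position."""
    total = 0.0
    for g, s in zip(feats_gen, feats_style):
        x, y = g.flatten(2), s.flatten(2)              # (b, c, n), (b, c, m)
        mu = y.mean(dim=2, keepdim=True)               # center on the style mean
        x = F.normalize(x - mu, dim=1)
        y = F.normalize(y - mu, dim=1)
        dist = 1.0 - torch.bmm(x.transpose(1, 2), y)   # cosine distance, (b, n, m)
        # Normalize each row by its minimum, then convert to affinities.
        dist = dist / (dist.min(dim=2, keepdim=True).values + eps)
        w = torch.exp((1.0 - dist) / band_width)
        cx = w / w.sum(dim=2, keepdim=True)
        total = total - torch.log(cx.max(dim=2).values.mean() + eps)
    return total

def identity_loss(model, encoder, image):
    """With identical content and style inputs, the model should return
    the input unchanged, both in pixel space and in feature space."""
    recon = model(image, image)
    pix = F.l1_loss(recon, image)
    feat = sum(F.mse_loss(r, i) for r, i in zip(encoder(recon), encoder(image)))
    return pix + feat

# Hypothetical weighting of the terms during training:
# loss = (w_p * perceptual_loss(feats_gen, feats_content)
#         + w_cx * contextual_loss(feats_gen, feats_style)
#         + w_id * (identity_loss(model, encoder, content_img)
#                   + identity_loss(model, encoder, style_img)))
```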