Abstract
Owing to its strong visual impact and inherent color contrast, woodcut-style
design has been applied in animation and comics. However, traditional
woodcut carving, hand drawing, and previous computer-aided methods have yet
to resolve the problems of dwindling design inspiration, lengthy
production times, and complex adjustment procedures. We propose a novel
network framework, the Woodcut-style Design Assistant Network (WDANet),
to tackle these challenges. Notably, our research is the first to
utilize diffusion models to streamline the woodcut-style design process.
We curate the Woodcut-62 dataset, featuring works by 62 renowned
historical artists, to train WDANet to capture the aesthetic nuances of
woodcut prints and offer users a wealth of design
references. Built on a denoising network, WDANet effectively integrates
text and woodcut-style image features. It allows users to enter or
slightly modify a text description and quickly generate accurate,
high-quality woodcut-style designs, saving time and offering
flexibility. User studies, together with quantitative and qualitative
analyses, confirm that WDANet outperforms the current state of the art in
generating woodcut-style images and demonstrates its value as a design aid.