UGSC-GAN: User-guided sketch colorization with deep convolution generative adversarial networks

Authors: Zhang, Junsong*; Zhu, Shaoqiang; Liu, Kunxiang; Liu, Xiaoyu
Source: Computer Animation and Virtual Worlds, 2022, 33(1): e2032.
DOI: 10.1002/cav.2032

Summary

Inspired by the process by which human artists create paintings, we propose a novel adversarial architecture for sketch colorization that supports scribble-based, automatic, and exemplar-based colorization. The proposed framework has two stages: an imitating stage and a shading stage. In the imitating stage, to address the lack of texture in the input sketch, we train a grayscale generation network that maps the sparse sketch to a grayscale map carrying texture, intensity, and boundary information. In the shading stage, the model colorizes the objects in the grayscale image produced by the previous stage and generates high-quality colorized images. With the proposed model trained on our database, the experimental results show that our method generates vivid colorized images and outperforms previous methods as evaluated by the FID metric.
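The two-stage design described above can be sketched as a simple pipeline: stage one maps a sparse sketch to a textured grayscale map, and stage two colorizes that map under user guidance. The functions below are placeholder stand-ins, not the authors' actual trained generators; the smoothing kernel, hint-tinting scheme, and array shapes are all illustrative assumptions.

```python
import numpy as np

def imitating_stage(sketch: np.ndarray) -> np.ndarray:
    """Stage 1 (placeholder): sparse sketch (H, W) in [0, 1] -> grayscale map.

    In the paper this is a trained grayscale generation network; here a
    3x3 box blur merely stands in for the sketch-to-grayscale mapping.
    """
    kernel = np.ones((3, 3)) / 9.0
    padded = np.pad(sketch, 1, mode="edge")
    gray = np.zeros_like(sketch)
    h, w = sketch.shape
    for i in range(h):
        for j in range(w):
            gray[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return gray

def shading_stage(gray: np.ndarray, hint_rgb: np.ndarray) -> np.ndarray:
    """Stage 2 (placeholder): grayscale (H, W) + user color hint (3,) -> RGB.

    The real shading network colorizes objects adversarially; tinting by
    the hint color only illustrates the guided-colorization interface.
    """
    return gray[..., None] * hint_rgb[None, None, :]

# A sparse square "sketch" colorized with an orange user hint.
sketch = np.zeros((8, 8))
sketch[2:6, 2:6] = 1.0
gray = imitating_stage(sketch)
color = shading_stage(gray, np.array([0.9, 0.5, 0.2]))
print(color.shape)  # (8, 8, 3)
```

The point of the sketch is the staged decomposition itself: because the shading network only ever sees a textured grayscale map, the hard sketch-to-texture problem is isolated in the first stage.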

  • Institution
    Xiamen University
