CS-GAN: Cross-Structure Generative Adversarial Networks for Chinese calligraphy translation

Publisher:
ELSEVIER
Publication Type:
Journal Article
Citation:
Knowledge-Based Systems, 2021, 229
Issue Date:
2021-10-11
Abstract:
Generative Adversarial Networks (GANs) have made great progress in cross-domain image translation. In practice, however, image-to-image translation tasks often involve structural differences between the two domains, as in translation on unpaired Chinese calligraphy datasets. Existing models can only convert color and texture features while leaving structure unchanged (e.g., in apples-to-oranges tasks, these models convert the color of the apples but preserve their shape). To address cross-structure image translation, such as cross-structure translation of Chinese calligraphy, this paper proposes a novel Generative Adversarial Network (GAN) model named CS-GAN. CS-GAN applies a distribution transform, the reparameterization trick, and feature sampling to convert feature maps obtained from domain S to domain T; images in domain T are then generated through feature concatenation. The proposed CS-GAN is verified on three sets of Chinese calligraphy data with structural differences, drawn from three famous calligraphers: Yan Zhenqing, Zhao Mengfu, and Ouyang Xun. Extensive experimental results show that CS-GAN successfully translates Chinese calligraphy data of different structures and outperforms state-of-the-art models.
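The abstract's pipeline (transform source-domain feature statistics toward the target domain, sample latent codes via the reparameterization trick, then concatenate features for the generator) can be illustrated with a minimal pure-Python sketch. This is a hypothetical reconstruction from the abstract alone: the moment-matching form of `distribution_transform` and all function names are assumptions, not the paper's actual implementation.

```python
import math
import random

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1).

    Moving the random draw into eps keeps mu and log_var inside a
    deterministic expression, which is what makes the trick
    differentiable in a real training setup.
    """
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def distribution_transform(feat_s, mu_t, sigma_t):
    """Shift/rescale source features toward target-domain statistics.

    Hypothetical moment-matching transform; the abstract does not
    specify the exact mapping used by CS-GAN.
    """
    n = len(feat_s)
    mu_s = sum(feat_s) / n
    sigma_s = math.sqrt(sum((f - mu_s) ** 2 for f in feat_s) / n) or 1.0
    return [sigma_t * (f - mu_s) / sigma_s + mu_t for f in feat_s]

rng = random.Random(0)
feat_s = [rng.gauss(2.0, 0.5) for _ in range(8)]      # toy domain-S features
shifted = distribution_transform(feat_s, mu_t=0.0, sigma_t=1.0)
z = reparameterize(shifted, log_var=[0.0] * 8, rng=rng)
decoder_input = shifted + z                            # feature concatenation
```

In a full model, `decoder_input` would feed the generator that renders domain-T images; here the lists simply stand in for feature maps to show how the three steps compose.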