Author: |
Wei Wei, Xia Yang, Xiaodong Duan, Chen Guo |
Language: |
English |
Year of publication: |
2023 |
Subject: |
|
Source: |
IEEE Access, Vol 11, Pp 65354-65370 (2023) |
Document type: |
article |
ISSN: |
2169-3536 |
DOI: |
10.1109/ACCESS.2023.3290102 |
Description: |
Most pose-guided person image synthesis methods obtain an accurate target pose but still fail to produce a reasonable style texture mapping. In this paper, we propose a new two-stage network that decouples style and content, aiming to improve both the accuracy of pose transfer and the realism of the person's appearance. Firstly, we propose an Aligned Multi-scale Content Transfer Network (AMSNet) that predicts the target edge map in advance for pose content transfer, which not only preserves clearer texture content but also alleviates spatial misalignment by transferring pose information early. Secondly, we propose a new Style Texture Transfer Network (STNet) that gradually transfers the source style features onto the target pose for a reasonable distribution of styles. To achieve appearance texture highly similar to the source style, we use a style-content-aware adaptive normalization method: the source style features are mapped into the same latent space as the aligned content images (target pose and edge), and consistency between style texture and content is enhanced through adaptive adjustment of the source style and target pose. Experimental results show that the proposed model can synthesize target images consistent with the source style, achieving superior results both quantitatively and qualitatively. |
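The style-content-aware adaptive normalization mentioned in the abstract is not specified in detail here. The sketch below assumes an AdaIN-like formulation in which per-channel scale and shift factors are predicted from a source style code and applied to instance-normalized, pose-aligned content features; the class and parameter names (StyleAdaptiveNorm, style_dim, to_scale, to_shift) are hypothetical and not taken from the paper.

```python
# Minimal illustrative sketch (assumption: AdaIN-style conditioning), not the
# paper's actual implementation.
import torch
import torch.nn as nn


class StyleAdaptiveNorm(nn.Module):
    def __init__(self, num_features: int, style_dim: int):
        super().__init__()
        # Normalize content (pose/edge-aligned) features without learned affine terms.
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        # Predict per-channel scale and shift from the source style code.
        self.to_scale = nn.Linear(style_dim, num_features)
        self.to_shift = nn.Linear(style_dim, num_features)

    def forward(self, content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # content: (B, C, H, W) features aligned to the target pose and edge map
        # style:   (B, style_dim) source style code
        gamma = self.to_scale(style).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        beta = self.to_shift(style).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        return (1 + gamma) * self.norm(content) + beta


if __name__ == "__main__":
    # Usage example with random tensors in place of real network features.
    layer = StyleAdaptiveNorm(num_features=64, style_dim=128)
    content = torch.randn(2, 64, 32, 32)
    style = torch.randn(2, 128)
    out = layer(content, style)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```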
Database: |
Directory of Open Access Journals |
External link: |
|