Personalized hairstyle and hair color editing based on multi-feature fusion.

Author: Xu, Jiayi; Zhang, Chenming; Zhu, Weikang; Zhang, Hongbin; Li, Li; Mao, Xiaoyang
Subject:
Source: Visual Computer; Jul 2024, Vol. 40, Issue 7, p4751-4763, 13p
Abstract: In the metaverse era, virtual hairstyle design has become popular for personalized aesthetics. Since hair design tasks can be decomposed into hair attribute editing and generation, the development of generative adversarial networks (GANs) has significantly advanced the field. Most existing algorithms focus on transferring the overall hair region from one face to another, ignoring fine-grained control over color and geometric features, and they may produce unnatural results. In this paper, we propose a hair modification framework that learns hairstyle information from a reference face mask and color information from a guidance face image. First, the features of the input face image and the reference images are extracted by a group of encoders and divided into coarse-, medium-, and fine-level feature vectors. Second, the multi-level feature vectors are fused in the latent space by attention-based modulation modules. Finally, the fused feature vector is passed through a StyleGAN generator to produce face images with the specified hairstyle and hair color. Experimental results show that the proposed method can finely simulate hairstyle transitions between long and short hair under the constraint of the reference mask and can produce realistic fusion effects in hair-covered regions such as the ears, neck, and forehead. Because facial features, including skin color and hair texture, are preserved when transferring hair color, various hair dyeing effects that adapt to personalized characteristics are demonstrated. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
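
The abstract describes a three-stage pipeline: multi-level encoding of the source face and the two references, attention-based fusion of the resulting latents, and decoding through a StyleGAN generator. The sketch below illustrates that general structure in PyTorch; it is not the authors' implementation. The W+ layout (18 style vectors of 512 dims), the coarse/medium/fine split boundaries, the toy encoder backbone, and the multi-head attention fusion are all assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

# Assumed W+ latent layout (StyleGAN2 at 1024px): 18 style vectors of
# 512 dims, split into coarse/medium/fine levels following the common
# e4e/pSp convention. The paper's actual split may differ.
LATENT_DIM = 512
N_STYLES = 18
LEVELS = {"coarse": slice(0, 4), "medium": slice(4, 8), "fine": slice(8, 18)}

class LevelEncoder(nn.Module):
    """Toy stand-in for one encoder in the group: maps an image (face,
    hairstyle mask, or color guide) to an 18 x 512 latent code in W+."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, N_STYLES * LATENT_DIM),
        )

    def forward(self, x):
        return self.backbone(x).view(-1, N_STYLES, LATENT_DIM)

class AttentionFusion(nn.Module):
    """Attention-based modulation: each source style vector attends over
    the hairstyle-reference and color-guidance latents of the same level
    and is modulated by the attended result (residual blend)."""
    def __init__(self):
        super().__init__()
        self.attn = nn.MultiheadAttention(LATENT_DIM, num_heads=8,
                                          batch_first=True)

    def forward(self, w_src, w_style, w_color):
        fused = w_src.clone()
        for level in LEVELS.values():
            q = w_src[:, level]                         # queries: source
            kv = torch.cat([w_style[:, level],          # keys/values:
                            w_color[:, level]], dim=1)  # both references
            out, _ = self.attn(q, kv, kv)
            fused[:, level] = q + out                   # residual update
        return fused

# Hypothetical usage; a pretrained StyleGAN generator is assumed to exist.
enc_face, enc_style, enc_color = LevelEncoder(), LevelEncoder(), LevelEncoder()
fuse = AttentionFusion()
face = torch.randn(1, 3, 256, 256)   # input face image
mask = torch.randn(1, 3, 256, 256)   # reference hairstyle mask (3-channel)
guide = torch.randn(1, 3, 256, 256)  # hair-color guidance image
w_plus = fuse(enc_face(face), enc_style(mask), enc_color(guide))
print(w_plus.shape)  # torch.Size([1, 18, 512])
# edited = stylegan_generator(w_plus)  # W+ code -> face with new hair
```

Fusing the coarse, medium, and fine slices separately mirrors the abstract's claim of fine control: hairstyle geometry mostly lives in the coarse/medium vectors while color and texture live in the fine ones, so per-level attention lets the two references influence different aspects of the output independently.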