Author: |
Shinde, Sandip, Joshi, Atharva, Jaiswal, Ansh, Jain, Sanskar, Jain, Sahil, Sharma, Mridul |
Subject: |
|
Source: |
Grenze International Journal of Engineering & Technology (GIJET); 2023, Vol. 9 Issue 2, p318-322, 6p |
Abstract: |
The difficult problem of text-to-face (TTF) synthesis holds enormous promise for numerous computer vision applications. Because of the multiplicity of facial features and the need to parse high-dimensional, abstract natural language, textual descriptions of faces can be far more intricate and detailed than those in Text-to-Image (TTI) synthesis tasks. Owing to the lack of suitable datasets, research on model performance in text-to-face generation remains limited. In this paper, we propose a DCGAN text-to-face model that produces images at 256x256 resolution. Experimental results show that the DCGAN model can generate high-quality images using multiple layers of deep convolutional neural networks. [ABSTRACT FROM AUTHOR] |
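As an illustrative aside (not part of the indexed record, and not the authors' implementation), a DCGAN generator typically reaches 256x256 output by stacking stride-2 transposed convolutions that double the spatial resolution at each layer. A minimal sketch of that size arithmetic, assuming the common kernel=4, stride=2, padding=1 configuration:

```python
# Spatial output size of a transposed convolution:
#   out = (in - 1) * stride - 2 * padding + kernel_size
def deconv_out(size, kernel=4, stride=2, padding=1):
    return (size - 1) * stride - 2 * padding + kernel

# Hypothetical generator stack: start from a 4x4 feature map and
# apply stride-2 transposed convolutions until reaching 256x256.
size = 4
sizes = [size]
while size < 256:
    size = deconv_out(size)
    sizes.append(size)

print(sizes)  # [4, 8, 16, 32, 64, 128, 256]
```

With these (assumed) hyperparameters, six transposed-convolution layers take a 4x4 latent projection to the 256x256 resolution reported in the abstract.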
Database: |
Complementary Index |
External link: |
|