TAM GAN: Tamil Text to Naturalistic Image Synthesis Using Conventional Deep Adversarial Networks
Author: | Diviya M, Karmel A |
---|---|
Year of publication: | 2023 |
Subject: | |
Source: | ACM Transactions on Asian and Low-Resource Language Information Processing. 22:1-18 |
ISSN: | 2375-4702, 2375-4699 |
DOI: | 10.1145/3584019 |
Description: | Text-to-image synthesis has recently advanced as a prospective area for improvement in computer vision applications. Image synthesis models follow significant neural network architectures such as Generative Adversarial Networks (GANs). Flourishing text-to-image generation approaches can nominally reflect the meaning of the text in the generated images, but they still fall short of providing the necessary details and eloquent object features. Intelligent systems have been trained for text-to-image synthesis in various languages; however, their contribution to regional languages is yet to be explored. Autoencoders can also drive image synthesis, but their outputs suffer from blurriness, which obscures the clear output and essential features of the picture. Based on textual descriptions, the GAN model is capable of producing realistic, high-quality images that can be used in various applications such as fashion design, photo editing, computer-aided design, and educational platforms. The proposed method uses two-stage processing to create a language model, using a BERT model called TAM-BERT and the existing MuRIL BERT, followed by image synthesis using a GAN. The work was conducted on the Oxford-102 dataset, and the model's efficiency was evaluated using the F1-score measure. |
Database: | OpenAIRE |
External link: | |
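
The description outlines a two-stage pipeline: a BERT-based Tamil text encoder (TAM-BERT or MuRIL BERT) followed by GAN image synthesis. The Python sketch below is a minimal illustration of such a pipeline under stated assumptions: the published TAM-BERT weights and the paper's exact GAN architecture are not given in this record, so the `google/muril-base-cased` checkpoint, the 100-d noise vector, the 64×64 DCGAN-style generator, and the `encode_caption`/`ConditionalGenerator` helpers are illustrative choices, not the authors' implementation.

```python
"""Sketch of a two-stage text-to-image pipeline: (1) encode a Tamil caption
with a MuRIL BERT encoder, (2) condition a GAN generator on that embedding.
Checkpoint name and generator layout are assumptions, not the paper's setup."""
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

# Stage 1: text encoding with MuRIL (assumed public checkpoint;
# a trained TAM-BERT model would be swapped in here).
tokenizer = AutoTokenizer.from_pretrained("google/muril-base-cased")
text_encoder = AutoModel.from_pretrained("google/muril-base-cased")

def encode_caption(caption: str) -> torch.Tensor:
    """Return the 768-d [CLS] embedding of a caption."""
    inputs = tokenizer(caption, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = text_encoder(**inputs)
    return outputs.last_hidden_state[:, 0]          # shape: (1, 768)

# Stage 2: a generic DCGAN-style conditional generator (hypothetical layout).
class ConditionalGenerator(nn.Module):
    def __init__(self, text_dim: int = 768, noise_dim: int = 100):
        super().__init__()
        self.fc = nn.Linear(text_dim + noise_dim, 512 * 4 * 4)
        self.net = nn.Sequential(                    # upsample 4x4 -> 64x64
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),  # RGB in [-1, 1]
        )

    def forward(self, text_emb: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
        x = self.fc(torch.cat([text_emb, noise], dim=1))
        return self.net(x.view(-1, 512, 4, 4))

if __name__ == "__main__":
    emb = encode_caption("ஒரு மஞ்சள் நிற மலர்")     # sample Tamil caption ("a yellow flower")
    gen = ConditionalGenerator()
    fake_image = gen(emb, torch.randn(1, 100))
    print(fake_image.shape)                          # torch.Size([1, 3, 64, 64])
```

In a full training loop, a discriminator would score the generated image against real Oxford-102 flower images conditioned on the same caption embedding; that part is omitted here for brevity.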