Visual-Relation Conscious Image Generation from Structured-Text
Author: | Duc Minh Vo, Akihiro Sugimoto |
---|---|
Year of publication: | 2020 |
Subject: | Structure (mathematical logic), Image generation, Structured text, Theoretical computer science, Relation (database), Stack (abstract data type), Computer science |
Source: | Computer Vision – ECCV 2020, ISBN 9783030586034, ECCV (28) |
DOI: | 10.1007/978-3-030-58604-1_18 |
Description: | We propose an end-to-end network for image generation from a given structured text, consisting of a visual-relation layout module and stacking-GANs. Our visual-relation layout module uses the relations among entities in the structured text in two ways: comprehensive usage and individual usage. We comprehensively use all relations together to localize initial bounding boxes (BBs) for all the entities. We then use each relation individually to predict, from the initial BBs, relation-units for all the relations. We unify all the relation-units to produce the visual-relation layout, i.e., BBs for all the entities, so that each BB uniquely corresponds to one entity while respecting the relations that entity is involved in. The visual-relation layout thus reflects the scene structure given in the input text. The stacking-GANs are a stack of three GANs, each conditioned on the visual-relation layout and the output of the previous GAN, so the scene structure is captured consistently. Our network renders entities' details realistically while preserving the scene structure. Experimental results on two public datasets show the effectiveness of our method. |
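The unification step in the abstract can be sketched as follows: each relation-unit proposes refined BBs for its subject and object entity, and unification merges the per-entity proposals so every entity ends up with exactly one BB. This is a minimal illustrative sketch (here merging by coordinate-wise averaging), not the authors' implementation; all names are hypothetical.

```python
# Hedged sketch of unifying relation-units into a visual-relation layout.
# A relation-unit is modeled as a dict mapping entity name -> BB (x, y, w, h),
# holding the refined boxes that one relation predicted for its two entities.
# Averaging as the merge rule is an assumption for illustration only.

def unify_relation_units(relation_units):
    """Merge per-relation BB proposals into one BB per entity."""
    sums, counts = {}, {}
    for unit in relation_units:
        for entity, bb in unit.items():
            acc = sums.setdefault(entity, [0.0, 0.0, 0.0, 0.0])
            for i, coord in enumerate(bb):
                acc[i] += coord
            counts[entity] = counts.get(entity, 0) + 1
    # Each entity's final BB is the mean of all proposals that mention it.
    return {e: tuple(v / counts[e] for v in acc) for e, acc in sums.items()}


# Two relation-units, e.g. from "person riding horse" and "person on grass":
units = [
    {"person": (0.1, 0.2, 0.3, 0.5), "horse": (0.4, 0.3, 0.4, 0.4)},
    {"person": (0.1, 0.2, 0.3, 0.5), "grass": (0.0, 0.6, 1.0, 0.4)},
]
layout = unify_relation_units(units)  # one BB per entity
```

"person" appears in both relation-units, so its two proposals are averaged; "horse" and "grass" each appear once and keep their single proposal.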
Database: | OpenAIRE |
External link: |