Sequential Image Synthesis for Human Activity Video Generation

Authors: Fahim Hasan Khan, Narges Norouzi, Jayanth Yetukuri, Akila de Silva
Year of publication: 2019
Subject:
Source: Lecture Notes in Computer Science, ISBN 9783030272715; ICIAR (2)
Description: In computer graphics and multimedia, automatically synthesizing a new image sequence from a different image sequence to create a realistic video or animation of a human activity is a research challenge. Traditionally, such animations and similar visual media content are created manually, which is a tedious task. Recent advances in deep learning have made promising progress toward automating this kind of media creation. This work is motivated by the idea of synthesizing a temporally coherent sequence of images (e.g., a video) of a person performing an activity from a video, or a set of images, of a different person performing a similar activity. To achieve this, our approach utilizes the cycle-consistent adversarial network (CycleGAN). We present a new approach for learning to transfer a human activity from a source domain to a target domain without any complicated pose detection or extraction method. Our objective is to learn a mapping between two consecutive sequences of images from two domains representing two different activities, and to use that mapping to transfer the activity from one domain to the other, synthesizing an entirely new consecutive sequence of images that can be combined into a video of a new human activity. We also present and analyze qualitative results generated by our method.
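As an illustration of the cycle-consistency idea the abstract relies on, the following is a minimal PyTorch sketch of one CycleGAN training step between two activity domains X and Y. It is not the authors' implementation: the toy Generator and Discriminator networks, the frame size (3x128x128), the LSGAN-style losses, and the weight lambda_cyc = 10.0 are all assumptions chosen for brevity, standing in for the full ResNet generators and PatchGAN discriminators of the original CycleGAN.

    # Minimal CycleGAN training sketch (illustrative; not the paper's code).
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.InstanceNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    class Generator(nn.Module):
        """Toy image-to-image generator (stand-in for a ResNet-style one)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                     nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
        def forward(self, x):
            return self.net(x)

    class Discriminator(nn.Module):
        """Toy PatchGAN-style discriminator producing a patch score map."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(conv_block(3, 32),
                                     nn.Conv2d(32, 1, 3, padding=1))
        def forward(self, x):
            return self.net(x)

    G, F = Generator(), Generator()              # G: X -> Y, F: Y -> X
    D_X, D_Y = Discriminator(), Discriminator()
    mse, l1 = nn.MSELoss(), nn.L1Loss()
    opt_g = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=2e-4)
    opt_d = torch.optim.Adam(list(D_X.parameters()) + list(D_Y.parameters()), lr=2e-4)
    lambda_cyc = 10.0                            # weight of the cycle-consistency term

    def train_step(real_x, real_y):
        # Generators: adversarial loss plus cycle-consistency loss.
        fake_y, fake_x = G(real_x), F(real_y)
        pred_fy, pred_fx = D_Y(fake_y), D_X(fake_x)
        loss_gan = mse(pred_fy, torch.ones_like(pred_fy)) + \
                   mse(pred_fx, torch.ones_like(pred_fx))
        # Cycle consistency: F(G(x)) should reconstruct x, and G(F(y)) should reconstruct y.
        loss_cyc = l1(F(fake_y), real_x) + l1(G(fake_x), real_y)
        loss_g = loss_gan + lambda_cyc * loss_cyc
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

        # Discriminators: real frames vs. detached generated frames.
        pr_y, pf_y = D_Y(real_y), D_Y(fake_y.detach())
        pr_x, pf_x = D_X(real_x), D_X(fake_x.detach())
        loss_d = mse(pr_y, torch.ones_like(pr_y)) + mse(pf_y, torch.zeros_like(pf_y)) + \
                 mse(pr_x, torch.ones_like(pr_x)) + mse(pf_x, torch.zeros_like(pf_x))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        return loss_g.item(), loss_d.item()

    if __name__ == "__main__":
        # One step on random stand-in frames from the two activity domains.
        x = torch.randn(2, 3, 128, 128)          # frames of activity A (domain X)
        y = torch.randn(2, 3, 128, 128)          # frames of activity B (domain Y)
        print(train_step(x, y))

The essential constraint is the cycle term: mapping a frame from domain X to Y and back should reconstruct the original frame, which is what allows the mapping to be learned from unpaired image sequences, without pose detection or paired supervision.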
Database: OpenAIRE