Showing 1 - 10 of 77 for search: '"Skorokhodov, Ivan"'
Author:
Skorokhodov, Ivan
Deep generative models are deep learning-based methods that are optimized to synthesize samples of a given distribution. During the past years, they have attracted a lot of interest from the research community, and the developed tools now enjoy many… (a toy sketch of this sampling objective follows this entry)
External link:
http://hdl.handle.net/10754/690419
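The abstract above defines deep generative models as networks optimized to synthesize samples of a given distribution. As a toy illustration of that objective (not code from the thesis), the sketch below trains a small generator to map Gaussian noise onto a 1-D target distribution by matching its first two moments; the architecture, moment-matching loss, and hyperparameters are illustrative assumptions.

```python
# Toy sketch: train a generator G(z) so its samples match a target
# distribution. Moment matching is used here purely for simplicity;
# real deep generative models (GANs, VAEs, diffusion) use richer losses.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Target distribution: a shifted, scaled Gaussian we want to imitate.
def sample_target(n):
    return 3.0 + 0.5 * torch.randn(n, 1)

# Generator: maps latent noise z ~ N(0, I) to data space.
generator = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(generator.parameters(), lr=1e-2)

for step in range(2000):
    z = torch.randn(256, 8)            # latent samples
    fake = generator(z)                # synthesized samples
    real = sample_target(256)          # samples of the given distribution
    # Match mean and variance of generated vs. target samples.
    loss = (fake.mean() - real.mean()) ** 2 + (fake.var() - real.var()) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

print(generator(torch.randn(1000, 8)).mean().item())  # ~3.0 after training
```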
Author:
Bahmani, Sherwin, Skorokhodov, Ivan, Qian, Guocheng, Siarohin, Aliaksandr, Menapace, Willi, Tagliasacchi, Andrea, Lindell, David B., Tulyakov, Sergey
Numerous works have recently integrated 3D camera control into foundational text-to-video models, but the resulting camera control is often imprecise, and video generation quality suffers. In this work, we analyze camera motion from a first principle… (a generic camera-conditioning sketch follows this entry)
External link:
http://arxiv.org/abs/2411.18673
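The entry above concerns precise 3D camera control in text-to-video models. The sketch below is a hypothetical, generic illustration (not the paper's method) of one common ingredient: encoding per-frame camera extrinsics into embeddings that a video backbone can consume alongside its other conditioning. The `CameraEncoder` module, its dimensions, and the injection strategy are all assumptions.

```python
# Hypothetical sketch of per-frame camera conditioning for a video model.
# It only illustrates turning camera extrinsics into per-frame embeddings
# that a video backbone could consume alongside text conditioning.
import torch
import torch.nn as nn

class CameraEncoder(nn.Module):
    """Maps per-frame 4x4 camera extrinsics to conditioning embeddings."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(16, embed_dim), nn.SiLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, extrinsics):             # [B, T, 4, 4]
        b, t = extrinsics.shape[:2]
        flat = extrinsics.reshape(b, t, 16)    # flatten each pose matrix
        return self.mlp(flat)                  # [B, T, embed_dim]

# Usage: the embeddings would typically be added to (or cross-attended with)
# the video model's per-frame features.
cams = torch.eye(4).repeat(2, 16, 1, 1)        # 2 clips, 16 frames, identity poses
cam_emb = CameraEncoder()(cams)
print(cam_emb.shape)                           # torch.Size([2, 16, 128])
```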
Author:
Bahmani, Sherwin, Skorokhodov, Ivan, Siarohin, Aliaksandr, Menapace, Willi, Qian, Guocheng, Vasilkovsky, Michael, Lee, Hsin-Ying, Wang, Chaoyang, Zou, Jiaxu, Tagliasacchi, Andrea, Lindell, David B., Tulyakov, Sergey
Modern text-to-video synthesis models demonstrate coherent, photorealistic generation of complex videos from a text description. However, most existing models lack fine-grained control over camera movement, which is critical for downstream applications…
External link:
http://arxiv.org/abs/2407.12781
Author:
Fang, Yuwei, Menapace, Willi, Siarohin, Aliaksandr, Chen, Tsai-Shien, Wang, Kuan-Chien, Skorokhodov, Ivan, Neubig, Graham, Tulyakov, Sergey
Existing text-to-video diffusion models rely solely on text-only encoders for their pretraining. This limitation stems from the absence of large-scale multimodal prompt video datasets, resulting in a lack of visual grounding and restricting their versatility… (an illustrative prompt-conditioning sketch follows this entry)
External link:
http://arxiv.org/abs/2407.06304
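The abstract above notes that text-to-video diffusion models are typically pretrained with text-only encoders, which limits visual grounding. The sketch below is an illustrative assumption, not the paper's architecture: a cross-attention layer that conditions video tokens on a prompt sequence, where a multimodal prompt simply concatenates image-derived tokens with the text tokens.

```python
# Illustrative sketch (not the paper's architecture): a denoiser block that
# cross-attends over prompt tokens. With a text-only encoder the prompt
# sequence contains text embeddings alone; a multimodal prompt concatenates
# image-derived tokens into the same sequence.
import torch
import torch.nn as nn

class PromptCrossAttention(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, video_tokens, prompt_tokens):
        # video_tokens: [B, N_video, dim], prompt_tokens: [B, N_prompt, dim]
        out, _ = self.attn(video_tokens, prompt_tokens, prompt_tokens)
        return out

dim = 256
video_tokens = torch.randn(1, 64, dim)          # latent video patches
text_tokens = torch.randn(1, 20, dim)           # text-encoder output
image_tokens = torch.randn(1, 16, dim)          # tokens from a reference image

layer = PromptCrossAttention(dim)
text_only = layer(video_tokens, text_tokens)    # text-only prompt
multimodal = layer(video_tokens, torch.cat([text_tokens, image_tokens], dim=1))
print(text_only.shape, multimodal.shape)        # both [1, 64, 256]
```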
Author:
Gu, Jing, Fang, Yuwei, Skorokhodov, Ivan, Wonka, Peter, Du, Xinya, Tulyakov, Sergey, Wang, Xin Eric
Video editing is a cornerstone of digital media, from entertainment and education to professional communication. However, previous methods often overlook the necessity of comprehensively understanding both global and local contexts, leading to inaccurate…
External link:
http://arxiv.org/abs/2406.12831
Diffusion models have demonstrated remarkable performance in image and video synthesis. However, scaling them to high-resolution inputs is challenging and requires restructuring the diffusion pipeline into multiple independent components, limiting scalability…
External link:
http://arxiv.org/abs/2406.07792
Author:
Zhang, Zhixing, Li, Yanyu, Wu, Yushu, Xu, Yanwu, Kag, Anil, Skorokhodov, Ivan, Menapace, Willi, Siarohin, Aliaksandr, Cao, Junli, Metaxas, Dimitris, Tulyakov, Sergey, Ren, Jian
Diffusion-based video generation models have demonstrated remarkable success in obtaining high-fidelity videos through the iterative denoising process. However, these models require multiple denoising steps during sampling, resulting in high computational cost… (an iterative-denoising sketch follows this entry)
External link:
http://arxiv.org/abs/2406.04324
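The abstract above points to the cost of iterative denoising at sampling time. The generic sketch below (a placeholder sampler and denoiser, not the paper's model) shows why: every step is one full network evaluation, so sampling cost grows linearly with the step count, which is what few-step distillation methods aim to cut.

```python
# Generic iterative denoising sketch (not the paper's sampler): each step
# calls the (expensive) denoiser once, so sampling cost grows linearly with
# the number of steps; distilled models aim for very few steps.
import torch
import torch.nn as nn

denoiser = nn.Sequential(nn.Linear(64, 64), nn.SiLU(), nn.Linear(64, 64))

def sample(num_steps, shape=(1, 64)):
    x = torch.randn(shape)                      # start from pure noise
    for i in range(num_steps):
        pred = denoiser(x)                      # one full network evaluation
        x = x + (pred - x) / (num_steps - i)    # crude update toward the prediction
    return x

with torch.no_grad():
    slow = sample(num_steps=50)    # 50 network evaluations
    fast = sample(num_steps=4)     # 4 evaluations: the few-step regime
print(slow.shape, fast.shape)
```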
Author:
Bahmani, Sherwin, Liu, Xian, Yifan, Wang, Skorokhodov, Ivan, Rong, Victor, Liu, Ziwei, Liu, Xihui, Park, Jeong Joon, Tulyakov, Sergey, Wetzstein, Gordon, Tagliasacchi, Andrea, Lindell, David B.
Recent techniques for text-to-4D generation synthesize dynamic 3D scenes using supervision from pre-trained text-to-video models. However, existing representations for motion, such as deformation models or time-dependent neural representations, are limited…
External link:
http://arxiv.org/abs/2403.17920
Author:
Menapace, Willi, Siarohin, Aliaksandr, Skorokhodov, Ivan, Deyneka, Ekaterina, Chen, Tsai-Shien, Kag, Anil, Fang, Yuwei, Stoliar, Aleksei, Ricci, Elisa, Ren, Jian, Tulyakov, Sergey
Contemporary models for generating images show remarkable quality and versatility. Swayed by these advantages, the research community repurposes them to generate videos. Since video content is highly redundant, we argue that naively bringing advances…
External link:
http://arxiv.org/abs/2402.14797
Author:
Qian, Guocheng, Cao, Junli, Siarohin, Aliaksandr, Kant, Yash, Wang, Chaoyang, Vasilkovsky, Michael, Lee, Hsin-Ying, Fang, Yuwei, Skorokhodov, Ivan, Zhuang, Peiye, Gilitschenski, Igor, Ren, Jian, Ghanem, Bernard, Aberman, Kfir, Tulyakov, Sergey
We introduce Amortized Text-to-Mesh (AToM), a feed-forward text-to-mesh framework optimized across multiple text prompts simultaneously. In contrast to existing text-to-3D methods that often entail time-consuming per-prompt optimization and commonly… (a toy amortized-vs-per-prompt sketch follows this entry)
External link:
http://arxiv.org/abs/2402.00867
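AToM is described as a feed-forward text-to-mesh model amortized across many prompts, in contrast to per-prompt test-time optimization. The toy sketch below illustrates that contrast only in the abstract: a dummy scalar loss stands in for real rendering or score-distillation supervision, and all modules, dimensions, and step counts are placeholders.

```python
# Toy contrast (not AToM itself) between per-prompt optimization and an
# amortized, prompt-conditioned feed-forward model. A dummy scalar loss
# stands in for real rendering / score-distillation supervision.
import torch
import torch.nn as nn

def toy_loss(asset, prompt_emb):
    # Placeholder objective: push asset parameters toward the prompt embedding.
    return ((asset - prompt_emb) ** 2).mean()

prompts = torch.randn(8, 32)                     # 8 prompt embeddings

# (a) Per-prompt optimization: one fresh parameter set optimized per prompt.
for prompt in prompts:
    asset = torch.zeros(32, requires_grad=True)
    opt = torch.optim.Adam([asset], lr=0.1)
    for _ in range(100):                         # repeated for every new prompt
        opt.zero_grad(); toy_loss(asset, prompt).backward(); opt.step()

# (b) Amortized: a single network maps any prompt to an asset in one pass.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                             # trained once over many prompts
    loss = toy_loss(model(prompts), prompts)
    opt.zero_grad(); loss.backward(); opt.step()

new_prompt = torch.randn(1, 32)
asset_for_new_prompt = model(new_prompt)         # feed-forward, no per-prompt loop
print(asset_for_new_prompt.shape)
```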