Showing 1 - 10 of 49 for search: '"Olszewski, Kyle"'
Author:
Ntavelis, Evangelos, Siarohin, Aliaksandr, Olszewski, Kyle, Wang, Chaoyang, Van Gool, Luc, Tulyakov, Sergey
We present a novel approach to the generation of static and articulated 3D assets that has a 3D autodecoder at its core. The 3D autodecoder framework embeds properties learned from the target dataset in the latent space, which can then be decoded int…
External link:
http://arxiv.org/abs/2307.05445
Author:
Siarohin, Aliaksandr, Menapace, Willi, Skorokhodov, Ivan, Olszewski, Kyle, Ren, Jian, Lee, Hsin-Ying, Chai, Menglei, Tulyakov, Sergey
We propose a novel approach for unsupervised 3D animation of non-rigid deformable objects. Our method learns the 3D structure and dynamics of objects solely from single-view RGB videos, and can decompose them into semantically meaningful parts that c…
External link:
http://arxiv.org/abs/2301.11326
The two popular datasets ScanRefer [16] and ReferIt3D [3] connect natural language to real-world 3D data. In this paper, we curate a large-scale and complementary dataset extending both the aforementioned ones by associating all objects mentioned in…
External link:
http://arxiv.org/abs/2212.06250
Author:
Cheng, Zezhou, Chai, Menglei, Ren, Jian, Lee, Hsin-Ying, Olszewski, Kyle, Huang, Zeng, Maji, Subhransu, Tulyakov, Sergey
Creating and editing the shape and color of 3D objects require tremendous human effort and expertise. Compared to direct manipulation in 3D interfaces, 2D interactions such as sketches and scribbles are usually much more natural and intuitive for the…
External link:
http://arxiv.org/abs/2207.11795
Diffusion probabilistic models (DPMs) have become a popular approach to conditional generation, due to their promising results and support for cross-modal synthesis. A key desideratum in conditional synthesis is to achieve high correspondence between…
External link:
http://arxiv.org/abs/2206.07771
We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis. While NeRF-based approaches are effective for novel view synthesis, such models memorize the radiance for every p…
External link:
http://arxiv.org/abs/2204.10850
Author:
Zhu, Ye, Olszewski, Kyle, Wu, Yu, Achlioptas, Panos, Chai, Menglei, Yan, Yan, Tulyakov, Sergey
We present Dance2Music-GAN (D2M-GAN), a novel adversarial multi-modal framework that generates complex musical samples conditioned on dance videos. Our proposed framework takes dance video frames and human body motions as input, and learns to generat…
External link:
http://arxiv.org/abs/2204.00604
Author:
Wang, Huan, Ren, Jian, Huang, Zeng, Olszewski, Kyle, Chai, Menglei, Fu, Yun, Tulyakov, Sergey
Recent research explosion on Neural Radiance Field (NeRF) shows the encouraging potential to represent complex scenes with neural networks. One major drawback of NeRF is its prohibitive inference time: rendering a single pixel requires querying the N…
External link:
http://arxiv.org/abs/2203.17261
Author:
Han, Ligong, Ren, Jian, Lee, Hsin-Ying, Barbieri, Francesco, Olszewski, Kyle, Minaee, Shervin, Metaxas, Dimitris, Tulyakov, Sergey
Most methods for conditional video synthesis use a single modality as the condition. This comes with major limitations. For example, it is problematic for a model conditioned on an image to generate a specific motion trajectory desired by the user si…
External link:
http://arxiv.org/abs/2203.02573
Author:
Kuang, Zhengfei, Olszewski, Kyle, Chai, Menglei, Huang, Zeng, Achlioptas, Panos, Tulyakov, Sergey
We present a novel method to acquire object representations from online image collections, capturing high-quality geometry and material properties of arbitrary objects from photographs with varying cameras, illumination, and backgrounds. This enables…
External link:
http://arxiv.org/abs/2201.02533