Showing 1 - 10 of 521
for search: '"Bagautdinov AT"'
Author:
Khirodkar, Rawal, Bagautdinov, Timur, Martinez, Julieta, Zhaoen, Su, James, Austin, Selednik, Peter, Anderson, Stuart, Saito, Shunsuke
We present Sapiens, a family of models for four fundamental human-centric vision tasks -- 2D pose estimation, body-part segmentation, depth estimation, and surface normal prediction. Our models natively support 1K high-resolution inference and are…
External link:
http://arxiv.org/abs/2408.12569
Author:
Lukoianov, Artem, Borde, Haitz Sáez de Ocáriz, Greenewald, Kristjan, Guizilini, Vitor Campagnolo, Bagautdinov, Timur, Sitzmann, Vincent, Solomon, Justin
While 2D diffusion models generate realistic, high-detail images, 3D shape generation methods like Score Distillation Sampling (SDS) built on these 2D diffusion models produce cartoon-like, over-smoothed shapes. To help explain this discrepancy, we…
External link:
http://arxiv.org/abs/2405.15891
Author:
Ng, Evonne, Romero, Javier, Bagautdinov, Timur, Bai, Shaojie, Darrell, Trevor, Kanazawa, Angjoo, Richard, Alexander
We present a framework for generating full-bodied photorealistic avatars that gesture according to the conversational dynamics of a dyadic interaction. Given speech audio, we output multiple possibilities of gestural motion for an individual…
External link:
http://arxiv.org/abs/2401.01885
Author:
Zielonka, Wojciech, Bagautdinov, Timur, Saito, Shunsuke, Zollhöfer, Michael, Thies, Justus, Romero, Javier
We present Drivable 3D Gaussian Avatars (D3GA), the first 3D controllable model for human bodies rendered with Gaussian splats. Current photorealistic drivable avatars require either accurate 3D registrations during training, dense input images…
External link:
http://arxiv.org/abs/2311.08581
Author:
Xiang, Donglai, Prada, Fabian, Cao, Zhe, Guo, Kaiwen, Wu, Chenglei, Hodgins, Jessica, Bagautdinov, Timur
Clothing is an important part of human appearance but challenging to model in photorealistic avatars. In this work we present avatars with dynamically moving loose clothing that can be faithfully driven by sparse RGB-D inputs as well as body and face…
External link:
http://arxiv.org/abs/2310.05917
High-fidelity human 3D models can now be learned directly from videos, typically by combining a template-based surface model with neural representations. However, obtaining a template surface requires expensive multi-view capture systems, laser scans…
External link:
http://arxiv.org/abs/2304.02013
Author:
Iwase, Shun, Saito, Shunsuke, Simon, Tomas, Lombardi, Stephen, Bagautdinov, Timur, Joshi, Rohan, Prada, Fabian, Shiratori, Takaaki, Sheikh, Yaser, Saragih, Jason
We present the first neural relighting approach for rendering high-fidelity personalized hands that can be animated in real-time under novel illumination. Our approach adopts a teacher-student framework, where the teacher learns appearance under a…
External link:
http://arxiv.org/abs/2302.04866
Published in:
Инженерные технологии и системы, Vol 34, Iss 2, Pp 229-243 (2024)
Introduction. The discrete element method (DEM) is the most promising method for modeling soil tillage. With DEM modeling it is possible to create a digital twin of the technological process of tool-soil interaction and analyze the operation…
External link:
https://doaj.org/article/adb1f5abfd3f4c41b94c46aa0c08785f
Author:
Remelli, Edoardo, Bagautdinov, Timur, Saito, Shunsuke, Simon, Tomas, Wu, Chenglei, Wei, Shih-En, Guo, Kaiwen, Cao, Zhe, Prada, Fabian, Saragih, Jason, Sheikh, Yaser
Published in:
SIGGRAPH 2022 Conference Proceedings
Photorealistic telepresence requires both high-fidelity body modeling and faithful driving to enable dynamically synthesized appearance that is indistinguishable from reality. In this work, we propose an end-to-end framework that addresses two core…
External link:
http://arxiv.org/abs/2207.09774
Author:
Xiang, Donglai, Bagautdinov, Timur, Stuyck, Tuur, Prada, Fabian, Romero, Javier, Xu, Weipeng, Saito, Shunsuke, Guo, Jingfan, Smith, Breannan, Shiratori, Takaaki, Sheikh, Yaser, Hodgins, Jessica, Wu, Chenglei
Despite recent progress in developing animatable full-body avatars, realistic modeling of clothing - one of the core aspects of human self-expression - remains an open challenge. State-of-the-art physical simulation methods can generate realistically…
External link:
http://arxiv.org/abs/2206.15470