Showing 1 - 10 of 20
for search: '"Shugao Ma"'
Author:
Stanislav Pidhorskyi, Timur Bagautdinov, Shugao Ma, Jason Saragih, Gabriel Schwartz, Yaser Sheikh, Tomas Simon
Published in:
ACM Transactions on Graphics. 41:1-18
Cameras with a finite aperture diameter exhibit defocus for scene elements that are not at the focus distance, and have only a limited depth of field within which objects appear acceptably sharp. In this work we address the problem of applying invers…
Author:
Jason Saragih, Dawei Wang, Yuecheng Li, Tomas Simon, Fernando De la Torre, Yaser Sheikh, Shugao Ma
Published in:
CVPR
Telecommunication with photorealistic avatars in virtual or augmented reality is a promising path for achieving authentic face-to-face communication in 3D over remote physical distances. In this work, we present the Pixel Codec Avatars (PiCA): a deep…
Published in:
DAC
Creating virtual avatars with realistic rendering is one of the most essential and challenging tasks to provide highly immersive virtual reality (VR) experiences. It requires not only sophisticated deep neural network (DNN) based codec avatar decoder…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::64a374565772a965f35cc2617ef3d7c2
http://arxiv.org/abs/2103.04958
Published in:
Computer Vision – ECCV 2020 ISBN: 9783030586096
ECCV (12)
VR telepresence consists of interacting with another human in a virtual space represented by an avatar. Today most avatars are cartoon-like, but soon the technology will allow video-realistic ones. This paper aims in this direction, and presents Modu…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::d101ec4aa597519f44142b26dc163fb2
https://doi.org/10.1007/978-3-030-58610-2_20
Published in:
WACV
Codec Avatars are a recent class of learned, photorealistic face models that accurately represent the geometry and texture of a person in 3D (i.e., for virtual reality), and are almost indistinguishable from video [28]. In this paper we describe the…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d4289dcf67d8ab826a42ac2432121efb
Author:
Xiaohui Shen, Mehrnoosh Sameki, Radomir Mech, Stan Sclaroff, Zhe Lin, Brian Price, Shugao Ma, Jianming Zhang, Margrit Betke
Published in:
CVPR
We study the problem of salient object subitizing, i.e. predicting the existence and the number of salient objects in an image using holistic cues. This task is inspired by the ability of people to quickly and accurately identify the number of items…
Published in:
International Journal of Computer Vision. 126:314-332
Human actions are, inherently, structured patterns of body movements. We explore ensembles of hierarchical spatio-temporal trees, discovered directly from training data, to model these structures for action recognition and spatial localization. Disco…
Published in:
ICCV
We present a 16.2-million frame (50-hour) multimodal dataset of two-person face-to-face spontaneous conversations. Our dataset features synchronized body and finger motion as well as audio data. To the best of our knowledge, it represents the largest…
Published in:
ICMI
Non-verbal behaviours such as gestures, facial expressions, body posture, and para-linguistic cues have been shown to complement or clarify verbal messages. Hence, to improve telepresence in the form of an avatar, it is important to model these behaviour…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::9a77abd7b4ef8f7a45587989b8aff11c
Published in:
Computer Vision – ECCV 2018 ISBN: 9783030012274
ECCV (5)
We introduce a data-driven approach for unsupervised video retargeting that translates content from one domain to another while preserving the style native to a domain, i.e., if contents of John Oliver’s speech were to be transferred to Stephen Col…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::2d5752d36fab0de0f3f255d948138300
https://doi.org/10.1007/978-3-030-01228-1_8