Showing 1 - 8 of 8
for search: '"Liangzhe Yuan"'
Author:
Hartwig Adam, Florian Schroff, Liangzhe Yuan, Liang-Chieh Chen, Jiaping Zhao, Ting Liu, Long Zhao, Yuxiao Wang, Jennifer J. Sun
Published in:
International Journal of Computer Vision. 130:111-135
Recognition of human poses and actions is crucial for autonomous systems to interact smoothly with people. However, cameras generally capture human poses in 2D as images and videos, which can have significant appearance variations across viewpoints t…
Published in:
IEEE Robotics and Automation Letters. 4:1343-1350
In this letter, we address the problem of providing human-assisted quadrotor navigation using a set of eye tracking glasses. The advent of these devices (i.e., eye tracking glasses, virtual reality tools, etc.) provides the opportunity to create new, …
Author:
Jiaping Zhao, Dimitris N. Metaxas, Liangzhe Yuan, Long Zhao, Ting Liu, Florian Schroff, Xi Peng, Yuxiao Wang, Hartwig Adam, Jennifer J. Sun
Published in:
CVPR
We introduce a novel representation learning method to disentangle pose-dependent as well as view-dependent factors from 2D human poses. The method trains a network using cross-view mutual information maximization (CV-MIM) which maximizes mutual info…
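The cross-view mutual information objective named in the snippet above can be illustrated with a small sketch. This is not the paper's implementation: it shows only the generic idea of lower-bounding mutual information between embeddings of the same pose seen from two viewpoints with an InfoNCE-style contrastive loss. All names here (`info_nce_loss`, `z_view1`, `z_view2`) are hypothetical.

```python
import numpy as np

def info_nce_loss(z_view1, z_view2, temperature=0.1):
    """InfoNCE-style lower bound on mutual information between two views.

    z_view1, z_view2: (batch, dim) embeddings of the same poses rendered
    from two different camera viewpoints; row i of each matrix is assumed
    to correspond to the same underlying pose (a positive pair).
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z_view1 / np.linalg.norm(z_view1, axis=1, keepdims=True)
    z2 = z_view2 / np.linalg.norm(z_view2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (batch, batch) similarity matrix
    # Row i's diagonal entry is the positive pair; off-diagonals are negatives.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))  # cross-entropy with identity targets

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
# Perfectly aligned views give a lower loss than unrelated embeddings.
aligned = info_nce_loss(z, z)
random_pair = info_nce_loss(z, rng.normal(size=(8, 32)))
print(aligned < random_pair)  # True
```

Minimizing this loss pushes same-pose embeddings together across viewpoints while pushing different poses apart, which is one standard way to maximize a mutual-information lower bound.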
Author:
Dan Kondratyuk, Mingxing Tan, Liangzhe Yuan, Yandong Li, Boqing Gong, Matthew Brown, Li Zhang
Published in:
CVPR
We present Mobile Video Networks (MoViNets), a family of computation and memory efficient video networks that can operate on streaming video for online inference. 3D convolutional neural networks (CNNs) are accurate at video recognition but require l…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::7b17ad5b9b858532a2f6731c129da0f3
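The "operate on streaming video for online inference" idea above can be sketched with a toy causal temporal convolution that keeps a small state buffer instead of a whole clip. This is a hedged illustration of the general streaming-buffer technique, not MoViNets' actual architecture; `StreamingCausalConv1D` and its fields are hypothetical names.

```python
import numpy as np

class StreamingCausalConv1D:
    """Temporal convolution that processes a video stream frame by frame.

    Rather than buffering a whole clip, it caches only the last
    (kernel_size - 1) feature frames, so memory stays constant no matter
    how long the stream runs -- the core idea of online inference.
    """

    def __init__(self, kernel, feat_dim):
        self.kernel = kernel  # (kernel_size,) temporal weights
        self.buffer = np.zeros((len(kernel) - 1, feat_dim))  # cached past frames

    def step(self, frame):
        """Consume one (feat_dim,) feature frame, emit one output frame."""
        window = np.vstack([self.buffer, frame[None]])  # (kernel_size, feat_dim)
        out = (self.kernel[:, None] * window).sum(axis=0)
        self.buffer = window[1:]  # slide the buffer forward by one frame
        return out

# Streaming output matches an offline causal (zero-padded) convolution.
kernel = np.array([0.25, 0.5, 0.25])
frames = np.arange(12, dtype=float).reshape(6, 2)
conv = StreamingCausalConv1D(kernel, feat_dim=2)
streamed = np.stack([conv.step(f) for f in frames])
```

Because each step touches only the buffer and the newest frame, per-frame cost and memory are independent of video length.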
Published in:
CVPR Workshops
We propose a demo of our work, Unsupervised Event-based Learning of Optical Flow, Depth and Egomotion, which will also appear at CVPR 2019. Our demo consists of a CNN which takes as input events from a DAVIS-346b event camera, represented as a discre…
Published in:
CVPR
In this work, we propose a novel framework for unsupervised learning for event cameras that learns motion information from only the event stream. In particular, we propose an input representation of the events in the form of a discretized volume that…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::16c2f934e4cdf5c0a08aa0cf655264f0
http://arxiv.org/abs/1812.08156
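The "discretized volume" input representation mentioned in this entry can be sketched as follows: events are binned into a fixed number of temporal slices, with each event's polarity split between the two nearest slices by linear interpolation in time. This is a simplified sketch of that general voxel-grid idea, not the paper's exact code; `events_to_voxel_volume` is a hypothetical name.

```python
import numpy as np

def events_to_voxel_volume(ts, xs, ys, ps, num_bins, height, width):
    """Discretize an event stream into a (num_bins, H, W) volume.

    Each event (timestamp t, pixel x/y, polarity p in {-1, +1}) deposits
    its polarity into the two nearest temporal bins with linear weights,
    preserving timing information that a plain event-count image loses.
    """
    volume = np.zeros((num_bins, height, width))
    # Normalize timestamps to the range [0, num_bins - 1].
    t_norm = (ts - ts.min()) / max(ts.max() - ts.min(), 1e-9) * (num_bins - 1)
    lower = np.floor(t_norm).astype(int)
    upper = np.minimum(lower + 1, num_bins - 1)
    frac = t_norm - lower
    # Scatter-add each event into its two neighboring time bins.
    for b, w in ((lower, 1.0 - frac), (upper, frac)):
        np.add.at(volume, (b, ys, xs), ps * w)
    return volume

# Three events at the start, middle, and end of the stream:
ts = np.array([0.0, 0.5, 1.0])
xs = np.array([1, 2, 3])
ys = np.array([0, 0, 0])
ps = np.array([1.0, -1.0, 1.0])
vol = events_to_voxel_volume(ts, xs, ys, ps, num_bins=3, height=2, width=4)
```

`np.add.at` is used instead of plain indexing so that multiple events landing on the same (bin, pixel) cell accumulate correctly.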
Published in:
CVPR
We propose a light-weight video frame interpolation algorithm. Our key innovation is an instance-level supervision that allows information to be learned from the high-resolution version of similar objects. Our experiment shows that the proposed metho…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::f360a532fd0fed066aa9710d1176666c
http://arxiv.org/abs/1812.01210
Published in:
Robotics: Science and Systems
Event-based cameras have shown great promise in a variety of situations where frame based cameras suffer, such as high speed motions and high dynamic range scenes. However, developing algorithms for event measurements requires a new class of hand cra…