Showing 1 - 10 of 12 for search: '"Mathew Monfort"'
Author:
SouYoung Jin, James Glass, Alexander H. Liu, Mathew Monfort, Aude Oliva, David Harwath, Rogerio Feris
Published in:
CVPR
When people observe events, they are able to abstract key information and build concise summaries of what is happening. These summaries include contextual and semantic information describing the important high-level details (what, where, who and how) …
Author:
Carl Vondrick, Alex Andonian, Allen S. Lee, Mathew Monfort, Rogerio Feris, Aude Oliva, Camilo Fosco
Published in:
Computer Vision – ECCV 2020 ISBN: 9783030585228
ECCV (18)
Identifying common patterns among events is a key capability for human and machine perception, as it underlies intelligent decision making. Here, we propose an approach for learning semantic relational set abstractions on videos, inspired by human …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::4d6a726d14bba8aec0ce0feff7da1e84
https://doi.org/10.1007/978-3-030-58523-5_2
Author:
Rogerio Feris, Barry A. McNamara, Quanfu Fan, Bowen Pan, Alex Lascelles, Kandan Ramakrishnan, Mathew Monfort, Dan Gutfreund, Alex Andonian, Aude Oliva
Videos capture events that typically contain multiple sequential, and simultaneous, actions even in the span of only a few seconds. However, most large-scale datasets built to train models for action recognition in video only provide a single label …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::6a2edb36df054be2e8cb4f94a3030c3e
http://arxiv.org/abs/1911.00232
Published in:
ICCV
Objects are entities we act upon, where the functionality of an object is determined by how we interact with it. In this work we propose a Dual Attention Network model which reasons about human-object interactions. The dual-attentional framework …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::9ddbdb96d687a4f3209d5afd233270f5
http://arxiv.org/abs/1909.04743
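The dual-attention idea described in the abstract above can be illustrated with a minimal sketch: two feature streams (human-centric and object-centric) are each scored against a query vector, the scores are softmax-normalized into attention weights, and the streams are fused as a weighted sum. All names and the fusion scheme here are illustrative assumptions, not the paper's actual architecture.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of raw scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dual_attention_fuse(human_feats, object_feats, query):
    """Fuse a human-centric and an object-centric feature vector by
    weighting each stream with its dot-product relevance to a query
    vector (hypothetical sketch, not the published model)."""
    score_h = sum(q * f for q, f in zip(query, human_feats))
    score_o = sum(q * f for q, f in zip(query, object_feats))
    w_h, w_o = softmax([score_h, score_o])
    return [w_h * h + w_o * o for h, o in zip(human_feats, object_feats)]

# Example: a query aligned with the human stream upweights that stream.
fused = dual_attention_fuse([1.0, 0.0], [0.0, 1.0], [1.0, 0.0])
```

In the example, the human stream scores higher against the query, so its attention weight exceeds the object stream's and it dominates the fused vector.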
Author:
Chris L. Baker, Yizhou Wang, Yifei Xu, Ying Nian Wu, Yibiao Zhao, Mathew Monfort, Tianyang Zhao, Wongun Choi
Published in:
CVPR
Accurate prediction of others' trajectories is essential for autonomous driving. Trajectory prediction is challenging because it requires reasoning about agents' past movements, social interactions among varying numbers and kinds of agents, …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b786caf2e056a3b6080395266f97c99c
Published in:
Journal of Vision. 20:1447
Author:
Sarah Adel Bargal, Lisa M. Brown, Kandan Ramakrishnan, Mathew Monfort, Bolei Zhou, Tom Yan, Aude Oliva, Carl Vondrick, Quanfu Fan, Dan Gutfreund, Alex Andonian
We present the Moments in Time Dataset, a large-scale human-annotated collection of one million short videos corresponding to dynamic events unfolding within three seconds. Modeling the spatial-audio-temporal dynamics even for actions occurring in 3 seconds …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::60de98aa0cc378059b71e4d55984d950
Published in:
2018 Conference on Cognitive Computational Neuroscience.
Published in:
ICRA
Robotic teleoperation from a human operator's pose demonstrations provides an intuitive and effective means of control that has been made feasible by improvements in sensor technologies in recent years. However, the imprecision of low-cost depth cameras …
Author:
Jonathan Komperda, G. Elisabeta Marai, Farzad Mashayek, Mathew Monfort, Brian D. Ziebart, Timothy Luciani
Published in:
Mathematics and Visualization ISBN: 9783319613574
We introduce a deep learning approach for the identification of shock locations in large scale tensor field datasets. Such datasets are typically generated by turbulent combustion simulations. In this proof of concept approach, we use deep learning …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::b7ed2de7ef5cbaca7e9403e285190f44
https://doi.org/10.1007/978-3-319-61358-1_16