Showing 1 - 10 of 37 for search: '"Howard Jenkins"'
Author:
Avetisyan, Armen, Xie, Christopher, Howard-Jenkins, Henry, Yang, Tsun-Yi, Aroudj, Samir, Patra, Suvam, Zhang, Fuyang, Frost, Duncan, Holland, Luke, Orme, Campbell, Engel, Jakob, Miller, Edward, Newcombe, Richard, Balntas, Vasileios
We introduce SceneScript, a method that directly produces full scene models as a sequence of structured language commands using an autoregressive, token-based approach. Our proposed scene representation is inspired by recent successes in transformers…
External link:
http://arxiv.org/abs/2403.13064
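SceneScript's key idea, as far as this snippet goes, is to decode a scene as a token sequence of structured commands. Below is a minimal, hypothetical sketch of such an autoregressive roll-out in PyTorch; the command vocabulary, the ScriptDecoder module, and the greedy generate loop are illustrative assumptions, not the paper's implementation, and conditioning on the input scene observations is omitted entirely.

```python
# Hypothetical sketch of autoregressive decoding of structured scene commands.
# The vocabulary, model class and greedy roll-out are illustrative assumptions,
# not the SceneScript implementation.
import torch
import torch.nn as nn

VOCAB = ["<bos>", "<eos>", "make_wall", "make_door", "make_window",
         "x=", "y=", "z=", "0", "1", "2", "3", "4", "5"]
TOK = {t: i for i, t in enumerate(VOCAB)}

class ScriptDecoder(nn.Module):
    """Toy decoder-only transformer over the command-token vocabulary."""
    def __init__(self, d_model=64, n_head=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_head, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, len(VOCAB))

    def forward(self, tokens):                       # tokens: (B, T)
        x = self.embed(tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.head(self.blocks(x, mask=mask))  # (B, T, |V|)

@torch.no_grad()
def generate(model, max_len=32):
    """Greedy autoregressive roll-out: feed the growing token sequence back in."""
    seq = torch.tensor([[TOK["<bos>"]]])
    for _ in range(max_len):
        logits = model(seq)[:, -1]                   # next-token distribution
        nxt = logits.argmax(-1, keepdim=True)
        seq = torch.cat([seq, nxt], dim=1)
        if nxt.item() == TOK["<eos>"]:
            break
    return [VOCAB[i] for i in seq[0].tolist()]

print(generate(ScriptDecoder()))                     # untrained, so output is arbitrary
```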
Author:
Engel, Jakob, Somasundaram, Kiran, Goesele, Michael, Sun, Albert, Gamino, Alexander, Turner, Andrew, Talattof, Arjang, Yuan, Arnie, Souti, Bilal, Meredith, Brighid, Peng, Cheng, Sweeney, Chris, Wilson, Cole, Barnes, Dan, DeTone, Daniel, Caruso, David, Valleroy, Derek, Ginjupalli, Dinesh, Frost, Duncan, Miller, Edward, Mueggler, Elias, Oleinik, Evgeniy, Zhang, Fan, Somasundaram, Guruprasad, Solaira, Gustavo, Lanaras, Harry, Howard-Jenkins, Henry, Tang, Huixuan, Kim, Hyo Jin, Rivera, Jaime, Luo, Ji, Dong, Jing, Straub, Julian, Bailey, Kevin, Eckenhoff, Kevin, Ma, Lingni, Pesqueira, Luis, Schwesinger, Mark, Monge, Maurizio, Yang, Nan, Charron, Nick, Raina, Nikhil, Parkhi, Omkar, Borschowa, Peter, Moulon, Pierre, Gupta, Prince, Mur-Artal, Raul, Pennington, Robbie, Kulkarni, Sachin, Miglani, Sagar, Gondi, Santosh, Solanki, Saransh, Diener, Sean, Cheng, Shangyi, Green, Simon, Saarinen, Steve, Patra, Suvam, Mourikis, Tassos, Whelan, Thomas, Singh, Tripti, Balntas, Vasileios, Baiyya, Vijay, Dreewes, Wilson, Pan, Xiaqing, Lou, Yang, Zhao, Yipu, Mansour, Yusuf, Zou, Yuyang, Lv, Zhaoyang, Wang, Zijian, Yan, Mingfei, Ren, Carl, De Nardi, Renzo, Newcombe, Richard
Egocentric, multi-modal data as available on future augmented reality (AR) devices provides unique challenges and opportunities for machine perception. These future devices will need to be all-day wearable in a socially acceptable form-factor to supp…
External link:
http://arxiv.org/abs/2308.13561
We propose Cos R-CNN, a simple exemplar-based R-CNN formulation that is designed for online few-shot object detection. That is, it is able to localise and classify novel object categories in images with few examples without fine-tuning. Cos R-CNN fra…
External link:
http://arxiv.org/abs/2307.13485
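The name Cos R-CNN suggests that novel categories are recognised by comparing region embeddings against exemplar embeddings with cosine similarity, which is what allows detection without fine-tuning. The sketch below shows only that matching step; the function name, feature dimensions, and similarity scale are assumptions, not the paper's actual head.

```python
# Hedged sketch of the exemplar-matching step suggested by the name "Cos R-CNN":
# region-of-interest embeddings are scored against class exemplar embeddings
# with cosine similarity, so novel classes need no fine-tuning. Shapes and the
# scale factor are illustrative assumptions.
import torch
import torch.nn.functional as F

def cosine_classify(roi_feats, exemplar_feats, scale=20.0):
    """
    roi_feats:      (R, D) embeddings of R candidate boxes
    exemplar_feats: (C, D) one embedding per novel-class exemplar
    returns:        (R, C) class logits from scaled cosine similarity
    """
    roi = F.normalize(roi_feats, dim=-1)
    ex = F.normalize(exemplar_feats, dim=-1)
    return scale * roi @ ex.t()

rois = torch.randn(100, 256)       # e.g. pooled RoI embeddings
exemplars = torch.randn(5, 256)    # 5 novel classes, one exemplar each
logits = cosine_classify(rois, exemplars)
pred_class = logits.argmax(dim=1)  # per-box class assignment
```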
We present LaLaLoc to localise in environments without the need for prior visitation, and in a manner that is robust to large changes in scene appearance, such as a full rearrangement of furniture. Specifically, LaLaLoc performs localisation through…
External link:
http://arxiv.org/abs/2104.09169
In this paper, we tackle the task of establishing dense visual correspondences between images containing objects of the same category. This is a challenging task due to large intra-class variations and a lack of dense pixel level annotations. We prop…
External link:
http://arxiv.org/abs/2003.12059
We present FlowNet3D++, a deep scene flow estimation network. Inspired by classical methods, FlowNet3D++ incorporates geometric constraints in the form of point-to-plane distance and angular alignment between individual vectors in the flow field, int…
External link:
http://arxiv.org/abs/1912.01438
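The abstract names two geometric constraints: a point-to-plane distance and an angular alignment between flow vectors. A rough sketch of how such loss terms could look is given below; the exact weighting, the source of the target normals, and the reference flow used for the angular term are assumptions, not the paper's formulation.

```python
# Hedged sketch of the two geometric terms named in the abstract: a
# point-to-plane distance between warped source points and the target surface,
# and an angular-alignment term between predicted and reference flow vectors.
# How the terms are weighted and where normals come from are assumptions.
import torch
import torch.nn.functional as F

def point_to_plane_loss(src, flow, tgt, tgt_normals):
    """
    src, flow:   (N, 3) source points and predicted per-point scene flow
    tgt:         (N, 3) nearest target point for each warped source point
    tgt_normals: (N, 3) unit normals of the target surface at those points
    """
    warped = src + flow
    # distance measured along the target normal, not full Euclidean distance
    return (((warped - tgt) * tgt_normals).sum(dim=1) ** 2).mean()

def angular_alignment_loss(flow, ref_flow, eps=1e-8):
    """Penalise the angle between predicted and reference flow vectors."""
    cos = F.cosine_similarity(flow, ref_flow, dim=1, eps=eps)
    return (1.0 - cos).mean()

# Toy usage with random tensors standing in for network outputs.
src = torch.randn(1024, 3)
flow = torch.randn(1024, 3)
tgt = src + torch.randn(1024, 3) * 0.01
normals = F.normalize(torch.randn(1024, 3), dim=1)
loss = point_to_plane_loss(src, flow, tgt, normals) + \
       0.1 * angular_alignment_loss(flow, tgt - src)
```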
We present a novel approach which is able to explore the configuration of grouped convolutions within neural networks. Group-size Series (GroSS) decomposition is a mathematical formulation of tensor factorisation into a series of approximations of in…
External link:
http://arxiv.org/abs/1912.00673
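The structural idea behind a group-size decomposition is that a dense convolution weight can be approximated by block-diagonal (grouped) weights of varying group size. The sketch below illustrates a single grouped approximation of a dense convolution and measures the resulting error; the series formulation and factorisation procedure of GroSS itself are not reproduced here.

```python
# Hedged sketch of the structural idea behind group-size decomposition: a dense
# convolution weight (C_out, C_in, k, k) can be approximated by a grouped
# convolution whose weight is block-diagonal over channels. The full series of
# approximations used by GroSS is not reproduced.
import torch
import torch.nn.functional as F

def grouped_approximation(weight, groups):
    """Keep only the block-diagonal channel blocks of a dense conv weight."""
    c_out, c_in, kh, kw = weight.shape
    assert c_out % groups == 0 and c_in % groups == 0
    go, gi = c_out // groups, c_in // groups
    blocks = [weight[g * go:(g + 1) * go, g * gi:(g + 1) * gi] for g in range(groups)]
    return torch.cat(blocks, dim=0)  # shape (C_out, C_in // groups, k, k)

x = torch.randn(1, 16, 32, 32)
w = torch.randn(32, 16, 3, 3)

dense = F.conv2d(x, w, padding=1)               # full convolution
w_g = grouped_approximation(w, groups=4)
approx = F.conv2d(x, w_g, padding=1, groups=4)  # cheaper grouped version

err = (dense - approx).pow(2).mean()            # approximation error
print(f"grouped-vs-dense MSE: {err.item():.4f}")
```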
We propose a method for room layout estimation that does not rely on the typical box approximation or Manhattan world assumption. Instead, we reformulate the geometry inference problem as an instance detection task, which we solve by directly regress…
External link:
http://arxiv.org/abs/1905.03105
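Treating layout estimation as instance detection means each planar layout element gets its own detection and regression output, rather than fitting one global box. The snippet is cut off before naming the regression target, so the per-instance plane parameters (unit normal plus offset) in the sketch below are an assumption, as are the class labels and feature dimension.

```python
# Hedged sketch of "layout as instance detection": each detected planar region
# gets its own regression output instead of fitting a single box / Manhattan
# model. Regressing a plane normal and offset per instance is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlaneInstanceHead(nn.Module):
    """Per-instance head: class score plus 4 plane parameters (n_x, n_y, n_z, d)."""
    def __init__(self, feat_dim=256, num_classes=3):   # e.g. wall / floor / ceiling
        super().__init__()
        self.cls = nn.Linear(feat_dim, num_classes)
        self.plane = nn.Linear(feat_dim, 4)

    def forward(self, instance_feats):                  # (N, feat_dim) pooled features
        logits = self.cls(instance_feats)
        params = self.plane(instance_feats)
        # normalise the normal part so each instance yields a valid plane n·x = d
        n = F.normalize(params[:, :3], dim=1)
        return logits, torch.cat([n, params[:, 3:]], dim=1)

head = PlaneInstanceHead()
feats = torch.randn(6, 256)            # features of 6 detected layout instances
scores, planes = head(feats)
print(scores.shape, planes.shape)      # torch.Size([6, 3]) torch.Size([6, 4])
```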
Author:
Howard-Jenkins, H
The challenge considered in this thesis is that of domesticating deep learning-based vision. Firstly, we advance the use of deep learning for indoor architectural comprehension. Secondly, we explore a taming of deep neural networks for low-cost infer…
External link:
https://explore.openaire.eu/search/publication?articleId=od______1064::b622551036346a879798957645c1f575
https://ora.ox.ac.uk/objects/uuid:a70105e8-fbd6-454e-8b8f-50a062329847
We present LaLaLoc to localise in environments without the need for prior visitation, and in a manner that is robust to large changes in scene appearance, such as a full rearrangement of furniture. Specifically, LaLaLoc performs localisation through…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::2d98e56727c372e9681356131621912e
https://ora.ox.ac.uk/objects/uuid:0ec3fefa-c8e5-40ec-8fd9-5e6f56dde5cf