Showing 1 - 10 of 17 for the search: '"Proenca, Pedro"'
We propose TRADE for robust tracking and 3D localization of a moving target in cluttered environments, from UAVs equipped with a single camera. Ultimately, TRADE enables 3D-aware target following. Tracking-by-detection approaches are vulnerable to target…
External link:
http://arxiv.org/abs/2210.03270
The next generation of Mars rotorcraft requires on-board autonomous hazard-avoidance landing. To this end, this work proposes a system that performs continuous multi-resolution height map reconstruction and safe landing spot detection. Structure-from-Motion…
External link:
http://arxiv.org/abs/2205.03522
Author:
Schoppmann, Pascal, Proença, Pedro F., Delaune, Jeff, Pantic, Michael, Hinzmann, Timo, Matthies, Larry, Siegwart, Roland, Brockers, Roland
In this paper, we propose a resource-efficient approach to provide an autonomous UAV with an on-board perception method to detect safe, hazard-free landing sites during flights over complex 3D terrain. We aggregate 3D measurements acquired from a sequence…
External link:
http://arxiv.org/abs/2111.06271
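The two landing-site entries above share the same core idea: aggregate 3D measurements into a height map and flag cells that look safe to land on. A minimal Python sketch of that idea follows; the function name, grid resolution and the roughness/step thresholds are illustrative assumptions, and the actual systems work incrementally, at multiple resolutions and with explicit uncertainty handling.

import numpy as np

def landing_candidates(points, res=0.5, max_rough=0.05, max_step=0.10):
    """Flag grid cells of an aggregated point cloud that look flat enough to land on.

    points: (N, 3) array of x, y, z measurements in a gravity-aligned frame.
    Returns a boolean grid (True = candidate landing cell).
    """
    ij = np.floor(points[:, :2] / res).astype(int)
    ij -= ij.min(axis=0)
    shape = tuple(ij.max(axis=0) + 1)
    n = np.zeros(shape)                      # per-cell measurement count
    s = np.zeros(shape)                      # per-cell sum of heights
    s2 = np.zeros(shape)                     # per-cell sum of squared heights
    np.add.at(n, (ij[:, 0], ij[:, 1]), 1)
    np.add.at(s, (ij[:, 0], ij[:, 1]), points[:, 2])
    np.add.at(s2, (ij[:, 0], ij[:, 1]), points[:, 2] ** 2)
    mean = np.where(n > 0, s / np.maximum(n, 1), np.nan)
    rough = np.sqrt(np.maximum(s2 / np.maximum(n, 1) - mean ** 2, 0.0))
    ok = (n > 0) & (rough < max_rough)       # observed and locally smooth
    # Reject cells next to a large height discontinuity (4-neighbourhood).
    step = np.zeros(shape, dtype=bool)
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        step |= np.abs(mean - np.roll(mean, (di, dj), axis=(0, 1))) > max_step
    return ok & ~step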
Author:
Proença, Pedro F., Simões, Pedro
TACO is an open image dataset for litter detection and segmentation, which is growing through crowdsourcing. Firstly, this paper describes this dataset and the tools developed to support it. Secondly, we report instance segmentation performance using…
External link:
http://arxiv.org/abs/2003.06975
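Assuming a local copy of TACO's COCO-format annotations, a few lines with pycocotools are enough to start exploring the dataset; the annotation path below is hypothetical.

# Explore TACO's COCO-format annotations (path is a hypothetical local copy).
from pycocotools.coco import COCO

coco = COCO("TACO/data/annotations.json")
print(len(coco.getImgIds()), "images,", len(coco.getCatIds()), "litter categories")

# Binary instance masks for the first image.
img = coco.loadImgs(coco.getImgIds()[0])[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img["id"]))
masks = [coco.annToMask(a) for a in anns]   # one HxW 0/1 mask per instance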
Author:
Proenca, Pedro F.
Visual odometry, the process of tracking the trajectory of a moving camera based on its captured video, is a fundamental problem behind autonomous mobile robotics and augmented reality applications. Yet, despite almost 40 years of extensive research on…
External link:
https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.766987
Author:
Proenca, Pedro F., Gao, Yang
On-orbit proximity operations in space rendezvous, docking and debris removal require precise and robust 6D pose estimation under a wide range of lighting conditions and against a highly textured background, i.e., the Earth. This paper investigates leveraging…
External link:
http://arxiv.org/abs/1907.04298
Author:
Proença, Pedro F., Gao, Yang
This paper presents CAPE, a method to extract planes and cylinder segments from organized point clouds, which processes 640x480 depth images on a single CPU core at an average of 300 Hz, by operating on a grid of planar cells. While, compared to state-of-the-art…
External link:
http://arxiv.org/abs/1803.02380
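The "grid of planar cells" idea behind the CAPE entry above can be sketched briefly: back-project the organized depth image, fit a plane to each fixed-size cell by PCA, and keep cells whose points lie close to that plane. The function names, cell size, intrinsics and residual threshold below are illustrative assumptions; the real method additionally grows and merges cells into plane and cylinder segments.

import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Convert a depth image (meters) into an organized (h, w, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def planar_cells(points, cell=20, max_rms=0.01):
    """Yield (row, col, normal, centroid) for cells that fit a plane well."""
    h, w, _ = points.shape
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            p = points[r:r + cell, c:c + cell].reshape(-1, 3)
            p = p[p[:, 2] > 0]                    # drop invalid depth
            if len(p) < cell * cell // 2:
                continue
            centroid = p.mean(axis=0)
            q = p - centroid
            # Smallest eigenvector of the covariance = plane normal (PCA).
            evals, evecs = np.linalg.eigh(q.T @ q / len(q))
            normal = evecs[:, 0]
            rms = np.sqrt(max(evals[0], 0.0))     # residual along the normal
            if rms < max_rms:
                yield r // cell, c // cell, normal, centroid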
Author:
Proença, Pedro F., Gao, Yang
Active depth cameras suffer from several limitations, which cause incomplete and noisy depth maps, and may consequently affect the performance of RGB-D odometry. To address this issue, this paper presents a visual odometry method based on point and line…
External link:
http://arxiv.org/abs/1708.02837
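For the RGB-D odometry entries in this list, the quantity estimated frame to frame is a rigid transform. A least-squares alignment of matched 3D points (Kabsch/SVD) is the simplest baseline for that step; the methods above additionally use line and plane constraints and weight residuals by depth uncertainty. The sketch below is generic, not the papers' implementation.

import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that R @ src_i + t approximates dst_i (both (N, 3))."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance of the matches
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Usage: recover a known rotation about z and a small translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.05])
R_est, t_est = rigid_transform(src, dst)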
Author:
Proenca, Pedro F., Gao, Yang
This work proposes a robust visual odometry method for structured environments that combines point features with line and plane segments, extracted through an RGB-D camera. Noisy depth maps are processed by a probabilistic depth fusion framework based on…
External link:
http://arxiv.org/abs/1706.04034
Author:
Proença, Pedro F., Gao, Yang
This work proposes a visual odometry method that combines points and plane primitives, extracted from a noisy depth camera. Depth measurement uncertainty is modelled and propagated through the extraction of geometric primitives to the frame-to-frame…
External link:
http://arxiv.org/abs/1705.06516
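The last two entries model depth measurement uncertainty and propagate it through the extracted primitives. As a first-order worked example, a per-pixel depth standard deviation can be propagated to a 3D point covariance through back-projection. The quadratic depth-noise model below is a common approximation for structured-light sensors, not the papers' exact model, and the intrinsics are illustrative.

import numpy as np

def point_covariance(u, v, z, fx, fy, cx, cy, sigma_z):
    """Covariance of p = ((u-cx)z/fx, (v-cy)z/fy, z) given the std-dev of z only."""
    # Jacobian of the back-projection with respect to the depth z.
    J = np.array([(u - cx) / fx, (v - cy) / fy, 1.0]).reshape(3, 1)
    return J @ J.T * sigma_z**2              # 3x3 covariance of the 3D point

# Example: a pixel far from the principal point at 3 m depth, with a depth
# standard deviation that grows quadratically with depth.
sigma_z = 0.0012 * 3.0**2                    # illustrative noise model
cov = point_covariance(600, 400, 3.0, fx=525, fy=525, cx=320, cy=240, sigma_z=sigma_z)
print(np.sqrt(np.diag(cov)))                 # per-axis std-dev in meters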