Showing 1 - 10 of 1,639 results for search: '"A. Boulch"'
Published in:
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol V-2-2022, Pp 415-421 (2022)
It is of interest to localize a ground-based LiDAR point cloud on remote sensing imagery. In this work, we tackle a subtask of this problem, i.e. mapping a digital elevation model (DEM) rasterized from an aerial LiDAR point cloud onto the aerial imagery. …
External link:
https://doaj.org/article/2c7a76fe1d8c418496edb42b4625cca5
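The entry above relies on rasterizing an aerial LiDAR point cloud into a DEM grid. A minimal sketch of that rasterization step, assuming a simple max-elevation reduction per cell (the paper's exact gridding choices are not given in the snippet):

```python
import numpy as np

def rasterize_dem(points, cell_size=1.0):
    """Rasterize an (N, 3) LiDAR point cloud into a DEM grid.

    Each cell stores the maximum z among the points falling in it;
    empty cells are NaN. Returns (dem, x_min, y_min) so that the grid
    can be georeferenced back to the point cloud's coordinate frame.
    """
    x_min, y_min = points[:, 0].min(), points[:, 1].min()
    cols = ((points[:, 0] - x_min) / cell_size).astype(int)
    rows = ((points[:, 1] - y_min) / cell_size).astype(int)
    dem = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, z in zip(rows, cols, points[:, 2]):
        # keep the highest elevation observed in each cell
        if np.isnan(dem[r, c]) or z > dem[r, c]:
            dem[r, c] = z
    return dem, x_min, y_min
```

In practice the cell size, the reduction (max, mean, or ground filtering), and hole filling for empty cells all depend on the sensor and the downstream alignment task.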
Online object segmentation and tracking in Lidar point clouds enables autonomous agents to understand their surroundings and make safe decisions. Unfortunately, manual annotations for these tasks are prohibitively costly. We tackle this problem with …
External link:
http://arxiv.org/abs/2409.07887
Machine learning based autonomous driving systems often face challenges with safety-critical scenarios that are rare in real-world data, hindering their large-scale deployment. While increasing real-world training data coverage could address this issue, …
External link:
http://arxiv.org/abs/2409.07830
Author:
Michele, Björn, Boulch, Alexandre, Vu, Tuan-Hung, Puy, Gilles, Marlet, Renaud, Courty, Nicolas
We tackle the challenging problem of source-free unsupervised domain adaptation (SFUDA) for 3D semantic segmentation. It amounts to performing domain adaptation on an unlabeled target domain without any access to source data; the available information …
External link:
http://arxiv.org/abs/2409.04409
Author:
Xu, Yihong, Zablocki, Éloi, Boulch, Alexandre, Puy, Gilles, Chen, Mickael, Bartoccioni, Florent, Samet, Nermin, Siméoni, Oriane, Gidaris, Spyros, Vu, Tuan-Hung, Bursuc, Andrei, Valle, Eduardo, Marlet, Renaud, Cord, Matthieu
Motion forecasting is crucial in autonomous driving systems to anticipate the future trajectories of surrounding agents such as pedestrians, vehicles, and traffic signals. In end-to-end forecasting, the model must jointly detect and track from sensor …
External link:
http://arxiv.org/abs/2406.08113
Author:
Sirko-Galouchenko, Sophia, Boulch, Alexandre, Gidaris, Spyros, Bursuc, Andrei, Vobecky, Antonin, Pérez, Patrick, Marlet, Renaud
We introduce a self-supervised pretraining method, called OccFeat, for camera-only Bird's-Eye-View (BEV) segmentation networks. With OccFeat, we pretrain a BEV network via occupancy prediction and feature distillation tasks. Occupancy prediction …
External link:
http://arxiv.org/abs/2404.14027
Author:
Puy, Gilles, Gidaris, Spyros, Boulch, Alexandre, Siméoni, Oriane, Sautier, Corentin, Pérez, Patrick, Bursuc, Andrei, Marlet, Renaud
Self-supervised image backbones can be used to address complex 2D tasks (e.g., semantic segmentation, object discovery) very efficiently and with little or no downstream supervision. Ideally, 3D backbones for lidar should be able to inherit these properties …
External link:
http://arxiv.org/abs/2310.17504
We present a surprisingly simple and efficient method for self-supervision of a 3D backbone on automotive Lidar point clouds. We design a contrastive loss between features of Lidar scans captured in the same scene. Several such approaches have been proposed …
External link:
http://arxiv.org/abs/2310.17281
Author:
Michele, Björn, Boulch, Alexandre, Puy, Gilles, Vu, Tuan-Hung, Marlet, Renaud, Courty, Nicolas
Published in:
2024 International Conference on 3D Vision (3DV), Davos, Switzerland, 2024, pp. 421-431
Learning models on one labeled dataset that generalize well to another domain is a difficult task, as several shifts might happen between the data domains. This is notably the case for lidar data, for which models can exhibit large performance discrepancies …
External link:
http://arxiv.org/abs/2304.03251
Author:
Ando, Angelika, Gidaris, Spyros, Bursuc, Andrei, Puy, Gilles, Boulch, Alexandre, Marlet, Renaud
Casting semantic segmentation of outdoor LiDAR point clouds as a 2D problem, e.g., via range projection, is an effective and popular approach. These projection-based methods usually benefit from fast computations and, when combined with techniques …
External link:
http://arxiv.org/abs/2301.10222
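The last entry mentions casting LiDAR segmentation as a 2D problem via range projection. A minimal sketch of the standard spherical range projection, with field-of-view values assumed to mimic a Velodyne HDL-64E (not necessarily the paper's settings):

```python
import numpy as np

def range_projection(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) lidar scan into an (H, W) range image.

    Azimuth maps to columns, elevation to rows; each pixel stores the
    depth of the (last) point landing there, -1 marking empty pixels.
    """
    fov_down_r = np.radians(fov_down)
    fov = np.radians(fov_up) - fov_down_r

    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / depth)

    u = 0.5 * (1.0 - yaw / np.pi) * W            # azimuth -> column
    v = (1.0 - (pitch - fov_down_r) / fov) * H   # elevation -> row
    u = np.clip(np.floor(u), 0, W - 1).astype(int)
    v = np.clip(np.floor(v), 0, H - 1).astype(int)

    image = np.full((H, W), -1.0)
    image[v, u] = depth
    return image
```

A 2D CNN can then segment this image, with the predictions scattered back to the points via the same (v, u) indices.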