Showing 1 - 7 of 7 results for the search: '"Christoph B. Rist"'
Published in:
IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10)
Semantic scene completion is the task of jointly estimating 3D geometry and semantics of objects and surfaces within a given extent. This is a particularly challenging task on real-world data that is sparse and occluded. We propose a scene segmentation …
Published in:
International Journal of Computer Vision, 130, 2962–2979
A considerable amount of research is concerned with the generation of realistic sensor data. LiDAR point clouds are generated by complex simulations or learned generative models. The generated data is usually exploited to enable or improve downstream …
External links:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d03e2f56aa660886a8b4915c406c3580
https://publikationen.bibliothek.kit.edu/1000151642/149457063
Published in:
2020 IEEE Intelligent Vehicles Symposium (IV).
This work proposes a spatially-conditioned neural network to perform semantic segmentation and geometric scene completion in 3D on real-world LiDAR data. Spatially-conditioned scene segmentation (SCSSnet) is a representation suitable to encode properties …
Published in:
IV
Autonomous vehicles need to have a semantic understanding of the three-dimensional world around them in order to reason about their environment. State of the art methods use deep neural networks to predict semantic classes for each point in a LiDAR scan …
This paper presents a novel CNN-based approach for synthesizing high-resolution LiDAR point cloud data. Our approach generates semantically and perceptually realistic results with guidance from specialized loss functions. First, we utilize a modified …
External links:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::3971f1e3a7a3589d526af7b019896dd0
http://arxiv.org/abs/1907.00787
Published in:
Proceedings of the IEEE Intelligent Vehicles Symposium (IV 2019)
A considerable amount of annotated training data is necessary to achieve state-of-the-art performance in perception tasks using point clouds. Unlike RGB-images, LiDAR point clouds captured with different sensors or varied mounting positions exhibit a …
External links:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::29257e2530579836f45d1b0a53c171b9
https://doi.org/10.1109/ivs.2019.8814047
Published in:
ITSC
Making Convolutional Neural Networks (CNNs) successful in learning problems such as image-based ego-motion estimation depends heavily on the ability of the network to extract temporal information from videos. Therefore, the architecture of a network …