Representation-Agnostic Shape Fields
Author: | Huang, Xiaoyang; Yang, Jiancheng; Wang, Yanjun; Chen, Ziyu; Li, Linguo; Li, Teng; Ni, Bingbing; Zhang, Wenjun |
Publication Year: | 2022 |
Subject: | |
Source: | published in the Tenth International Conference on Learning Representations (ICLR 2022) |
Document Type: | Working Paper |
Description: | 3D shape analysis has been widely explored in the era of deep learning. Numerous models have been developed for specific 3D data representation formats, e.g., MeshCNN for meshes, PointNet for point clouds and VoxNet for voxels. In this study, we present Representation-Agnostic Shape Fields (RASF), a generalizable and computation-efficient shape embedding module for 3D deep learning. RASF is implemented as a learnable 3D grid with multiple channels that stores local geometry. Based on RASF, shape embeddings for various 3D shape representations (point clouds, meshes and voxels) are retrieved by coordinate indexing. While there are multiple ways to optimize the learnable parameters of RASF, this paper presents two effective pre-training schemes: shape reconstruction and normal estimation. Once trained, RASF becomes a plug-and-play performance booster with negligible cost. Extensive experiments on diverse 3D representation formats, networks and applications validate the universal effectiveness of the proposed RASF. Code and pre-trained models are publicly available at https://github.com/seanywang0408/RASF Comment: The Tenth International Conference on Learning Representations (ICLR 2022). Code is available at https://github.com/seanywang0408/RASF |
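The description states that shape embeddings are retrieved from a learnable multi-channel 3D grid by coordinate indexing. The following is a minimal illustrative sketch of that lookup idea, not the authors' implementation: the grid shape, channel count, and nearest-cell indexing are assumptions (the actual RASF module would be trained end-to-end and may interpolate between cells).

```python
import numpy as np

def rasf_lookup(grid, points):
    """Retrieve per-point embeddings from a RASF-style grid by coordinate indexing.

    grid:   (C, R, R, R) array of (hypothetically learnable) parameters
    points: (N, 3) coordinates normalized to [-1, 1]
    returns (N, C) embeddings via nearest-cell indexing (a simplification;
    a trained module would more likely use trilinear interpolation).
    """
    R = grid.shape[1]
    # Map [-1, 1] coordinates to integer cell indices in [0, R-1].
    idx = np.clip(((points + 1.0) * 0.5 * (R - 1)).round().astype(int), 0, R - 1)
    # Fancy indexing gathers one C-vector per point: (C, N) -> (N, C).
    return grid[:, idx[:, 0], idx[:, 1], idx[:, 2]].T

rng = np.random.default_rng(0)
grid = rng.standard_normal((8, 16, 16, 16))  # 8 channels, 16^3 grid (illustrative sizes)
pts = rng.uniform(-1.0, 1.0, size=(100, 3))  # e.g. a normalized point cloud
emb = rasf_lookup(grid, pts)
print(emb.shape)  # (100, 8)
```

Because the lookup only depends on point coordinates, the same grid can embed points sampled from point clouds, mesh vertices, or occupied voxel centers, which is what makes the module representation-agnostic.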
Database: | arXiv |
External Link: |