Lossless Coding of Point Cloud Geometry using a Deep Generative Model
Author: Dat Thanh Nguyen, Giuseppe Valenzise, Pierre Duhamel, Maurice Quach
Contributors: Laboratoire des signaux et systèmes (L2S), CentraleSupélec-Université Paris-Saclay-Centre National de la Recherche Scientifique (CNRS)
Language: English
Year of publication: 2021
Subject: FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Computer Vision and Pattern Recognition (cs.CV); Electrical Engineering and Systems Science - Image and Video Processing (eess.IV); FOS: Electrical engineering, electronic engineering, information engineering; [INFO.INFO-TS] Computer Science [cs]/Signal and Image Processing; 02 engineering and technology; 0202 electrical engineering, electronic engineering, information engineering; 020201 artificial intelligence & image processing; computer science; point cloud; point cloud coding; octree; deep learning; lossless compression; context model; arithmetic coding; generative model; probability distribution; G-PCC; block (data storage); media technology; electrical and electronic engineering; algorithm
Source: IEEE Transactions on Circuits and Systems for Video Technology, Institute of Electrical and Electronics Engineers, 2021, 31 (12), pp. 4617-4629. ⟨10.1109/TCSVT.2021.3100279⟩
ISSN: 1051-8215
DOI: 10.1109/TCSVT.2021.3100279
Description: This paper proposes a lossless point cloud (PC) geometry compression method that uses neural networks to estimate the probability distribution of voxel occupancy. First, to take into account the sparsity of PCs, our method adaptively partitions a point cloud into multiple voxel block sizes. This partitioning is signalled via an octree. Second, we employ a deep auto-regressive generative model to estimate the occupancy probability of each voxel given the previously encoded ones. We then use the estimated probabilities to efficiently code each block with a context-based arithmetic coder. Our context has variable size and can expand beyond the current block to learn more accurate probabilities. We also consider data augmentation techniques to increase the generalization capability of the learned probability models, in particular in the presence of noise and lower-density point clouds. Experimental evaluation, performed on a variety of point clouds from four different datasets with diverse characteristics, demonstrates that our method significantly reduces the rate for lossless coding (by up to 30%) compared to the state-of-the-art MPEG codec. This paper has been submitted to the IEEE Transactions on Circuits and Systems for Video Technology (TCSVT). arXiv admin note: text overlap with arXiv:2011.14700. (An illustrative code sketch of this pipeline follows the record below.)
Database: OpenAIRE
External link:
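The description above outlines a three-stage pipeline: adaptive block partitioning signalled via an octree, auto-regressive estimation of voxel occupancy probabilities, and context-based arithmetic coding driven by those probabilities. The minimal Python sketch below illustrates only how information flows between these stages; the deep generative model is replaced by a trivial running-frequency stub (`SimpleContextModel`), and all names, block sizes, depths, and densities are illustrative assumptions rather than the authors' implementation or reported settings.

```python
import numpy as np


class SimpleContextModel:
    """Stand-in for the paper's deep auto-regressive model: predicts
    P(voxel occupied) from the voxels already (de)coded."""

    def __init__(self):
        # Laplace-smoothed running counts of occupied voxels seen so far.
        self.occupied = 1.0
        self.total = 2.0

    def prob_occupied(self, context):
        # A real model would condition on the 3-D causal context; this stub
        # only uses the running occupancy frequency of previously coded voxels.
        return self.occupied / self.total

    def update(self, symbol):
        self.occupied += symbol
        self.total += 1.0


def block_rate_bits(block):
    """Ideal arithmetic-coding cost (in bits) of a binary voxel block when each
    voxel is coded with the probability predicted from already-coded voxels."""
    model = SimpleContextModel()
    flat = block.reshape(-1)            # raster-scan (auto-regressive) order
    bits = 0.0
    for i, v in enumerate(flat):
        p1 = np.clip(model.prob_occupied(flat[:i]), 1e-6, 1.0 - 1e-6)
        p = p1 if v == 1 else 1.0 - p1
        bits += -np.log2(p)             # arithmetic coding approaches -log2(p) bits per symbol
        model.update(int(v))
    return bits


def partition_octree(block, max_depth, min_size):
    """Adaptive octree partitioning: empty sub-blocks are pruned (only a '0'
    split flag is signalled); non-empty ones are split down to `min_size`
    and then handed to the voxel-level coder."""
    leaves, flags = [], []

    def split(b, depth):
        if b.max() == 0:
            flags.append(0)             # empty sub-block: one flag, nothing else
            return
        flags.append(1)
        if depth == 0 or b.shape[0] <= min_size:
            leaves.append(b)            # leaf block coded voxel by voxel
            return
        h = b.shape[0] // 2
        for x in (0, h):
            for y in (0, h):
                for z in (0, h):
                    split(b[x:x + h, y:y + h, z:z + h], depth - 1)

    split(block, max_depth)
    return leaves, flags


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A sparse 16x16x16 occupancy block, roughly 1% of voxels occupied.
    block = (rng.random((16, 16, 16)) < 0.01).astype(np.uint8)

    leaves, flags = partition_octree(block, max_depth=2, min_size=4)
    total_bits = len(flags) + sum(block_rate_bits(b) for b in leaves)

    print(f"{int(block.sum())} occupied voxels, "
          f"{len(leaves)} non-empty leaf blocks, {len(flags)} octree flags")
    print(f"estimated rate: {total_bits:.1f} bits "
          f"({total_bits / block.size:.3f} bits per voxel vs. 1.0 for raw coding)")
```

In the actual method, the per-voxel probabilities come from a learned network conditioned on a variable-size 3-D context that can extend beyond the current block; the closer those estimates are to the true conditional occupancy distribution, the closer the arithmetic coder's rate gets to the entropy of the geometry.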