Showing 1 - 10 of 46 results for search: '"Boukhayma Adnane"'
Author:
Baert, Kelian, Bharadwaj, Shrisha, Castan, Fabien, Maujean, Benoit, Christie, Marc, Abrevaya, Victoria, Boukhayma, Adnane
Published in:
SIGGRAPH Asia 2024 Conference Papers (SA Conference Papers '24), December 3-6, 2024, Tokyo, Japan
Feedforward monocular face capture methods seek to reconstruct posed faces from a single image of a person. Current state-of-the-art approaches can regress parametric 3D face models in real time across a wide range of identities, lighting…
External link:
http://arxiv.org/abs/2409.07984
Author:
Ouasfi, Amine, Boukhayma, Adnane
Implicit Neural Representations have gained prominence as a powerful framework for capturing complex data modalities, encompassing a wide range from 3D shapes to images and audio. Within the realm of 3D shape representation, Neural Signed Distance Functions…
External link:
http://arxiv.org/abs/2408.15114
This paper presents a novel approach for sparse 3D reconstruction by leveraging the expressive power of Neural Radiance Fields (NeRFs) and fast transfer of their features to learn accurate occupancy fields. Existing 3D reconstruction methods from sparse…
External link:
http://arxiv.org/abs/2408.14724
We present a novel approach for recovering 3D shape and view-dependent appearance from a few colored images, enabling efficient 3D reconstruction and novel view synthesis. Our method learns an implicit neural representation in the form of a Signed Distance…
External link:
http://arxiv.org/abs/2407.14257
Author:
Ouasfi, Amine, Boukhayma, Adnane
Implicit Neural Representations have gained prominence as a powerful framework for capturing complex data modalities, encompassing a wide range from 3D shapes to images and audio. Within the realm of 3D shape representation, Neural Signed Distance Functions…
External link:
http://arxiv.org/abs/2404.02759
Author:
Ouasfi, Amine, Boukhayma, Adnane
Feedforward generalizable models for implicit shape reconstruction from unoriented point clouds present multiple advantages, including high performance and inference speed. However, they still suffer from generalization issues, ranging from underfitting…
External link:
http://arxiv.org/abs/2311.12967
Author:
Ouasfi, Amine, Boukhayma, Adnane
While current state-of-the-art generalizable implicit neural shape models rely on the inductive bias of convolutions, it is still not entirely clear how properties emerging from such biases are compatible with the task of 3D reconstruction from point…
External link:
http://arxiv.org/abs/2311.12125
We revisit NPBG, the popular approach to novel view synthesis that introduced the ubiquitous point feature neural rendering paradigm. We are particularly interested in data-efficient learning with fast view synthesis. We achieve this through a view-…
External link:
http://arxiv.org/abs/2208.05785
We explore a new strategy for few-shot novel view synthesis based on a neural light field representation. Given a target camera pose, an implicit neural network maps each ray directly to its target pixel's color. The network is conditioned on local r…
External link:
http://arxiv.org/abs/2207.11757
Author:
Ouasfi, Amine, Boukhayma, Adnane
We explore a new idea for learning-based shape reconstruction from a point cloud, based on the recently popularized implicit neural shape representations. We cast the problem as few-shot learning of implicit neural signed distance functions in feature…
External link:
http://arxiv.org/abs/2207.04161