Showing 1 - 10 of 26
for search: '"Jiang, Chiyu Max"'
Author:
Xiao, Zihao, Jing, Longlong, Wu, Shangxuan, Zhu, Alex Zihao, Ji, Jingwei, Jiang, Chiyu Max, Hung, Wei-Chih, Funkhouser, Thomas, Kuo, Weicheng, Angelova, Anelia, Zhou, Yin, Sheng, Shiwei
3D panoptic segmentation is a challenging perception task, especially in autonomous driving. It aims to predict both semantic and instance annotations for 3D points in a scene. Although prior 3D panoptic segmentation approaches have achieved great pe…
External link:
http://arxiv.org/abs/2401.02402
We present MotionDiffuser, a diffusion based representation for the joint distribution of future trajectories over multiple agents. Such representation has several key advantages: first, our model learns a highly multimodal distribution that captures…
External link:
http://arxiv.org/abs/2306.03083
Author:
Deng, Congyue, Jiang, Chiyu "Max", Qi, Charles R., Yan, Xinchen, Zhou, Yin, Guibas, Leonidas, Anguelov, Dragomir
2D-to-3D reconstruction is an ill-posed problem, yet humans are good at solving this problem due to their prior knowledge of the 3D world developed over years. Driven by this observation, we propose NeRDi, a single-view NeRF synthesis framework with…
External link:
http://arxiv.org/abs/2212.03267
Author:
Peng, Songyou, Genova, Kyle, Jiang, Chiyu "Max", Tagliasacchi, Andrea, Pollefeys, Marc, Funkhouser, Thomas
Traditional 3D scene understanding approaches rely on labeled 3D datasets to train a model for a single task with supervision. We propose OpenScene, an alternative approach where a model predicts dense features for 3D scene points that are co-embedde…
External link:
http://arxiv.org/abs/2211.15654
Continued improvements in deep learning architectures have steadily advanced the overall performance of 3D object detectors to levels on par with humans for certain tasks and datasets, where the overall performance is mostly driven by common examples…
External link:
http://arxiv.org/abs/2210.08375
Author:
Peng, Songyou, Jiang, Chiyu "Max", Liao, Yiyi, Niemeyer, Michael, Pollefeys, Marc, Geiger, Andreas
In recent years, neural implicit representations gained popularity in 3D reconstruction due to their expressiveness and flexibility. However, the implicit nature of neural implicit representations results in slow inference time and requires careful i…
External link:
http://arxiv.org/abs/2106.03452
Author:
Jiang, Chiyu "Max", Huang, Jingwei, Tagliasacchi, Andrea, Guibas, Leonidas
We present ShapeFlow, a flow-based model for learning a deformation space for entire classes of 3D shapes with large intra-class variations. ShapeFlow allows learning a multi-template deformation space that is agnostic to shape topology, yet preserve…
External link:
http://arxiv.org/abs/2006.07982
We present MeshODE, a scalable and robust framework for pairwise CAD model deformation without prespecified correspondences. Given a pair of shapes, our framework provides a novel shape feature-preserving mapping function that continuously deforms on…
External link:
http://arxiv.org/abs/2005.11617
Author:
Jiang, Chiyu Max, Esmaeilzadeh, Soheil, Azizzadenesheli, Kamyar, Kashinath, Karthik, Mustafa, Mustafa, Tchelepi, Hamdi A., Marcus, Philip, Prabhat, Anandkumar, Anima
We propose MeshfreeFlowNet, a novel deep learning-based super-resolution framework to generate continuous (grid-free) spatio-temporal solutions from the low-resolution inputs. While being computationally efficient, MeshfreeFlowNet accurately recovers…
External link:
http://arxiv.org/abs/2005.01463
Author:
Jiang, Chiyu Max, Sud, Avneesh, Makadia, Ameesh, Huang, Jingwei, Nießner, Matthias, Funkhouser, Thomas
Shape priors learned from data are commonly used to reconstruct 3D objects from partial or noisy data. Yet no such shape priors are available for indoor scenes, since typical 3D autoencoders cannot handle their scale, complexity, or diversity. In thi…
External link:
http://arxiv.org/abs/2003.08981