Showing 1 - 10 of 47 results for search: '"Chen, Nenglun"'
Recently, many 2D pretrained foundational models have demonstrated impressive zero-shot prediction capabilities. In this work, we design a novel pipeline for zero-shot 3D part segmentation, called ZeroPS. It performs a high-quality transfer of knowledge from 2D …
External link:
http://arxiv.org/abs/2311.14262
Author:
Chen, Runnan, Zhu, Xinge, Chen, Nenglun, Wang, Dawei, Li, Wei, Ma, Yuexin, Yang, Ruigang, Liu, Tongliang, Wang, Wenping
Current successful methods for 3D scene perception rely on large-scale annotated point clouds, which are tedious and expensive to acquire. In this paper, we propose Model2Scene, a novel paradigm that learns free 3D scene representation from Computer …
External link:
http://arxiv.org/abs/2309.16956
Author:
Chen, Runnan, Liu, Youquan, Kong, Lingdong, Chen, Nenglun, Zhu, Xinge, Ma, Yuexin, Liu, Tongliang, Wang, Wenping
Vision foundation models such as Contrastive Vision-Language Pre-training (CLIP) and Segment Anything (SAM) have demonstrated impressive zero-shot performance on image classification and segmentation tasks. However, the incorporation of CLIP and SAM …
External link:
http://arxiv.org/abs/2306.03899
We investigate transductive zero-shot point cloud semantic segmentation, where the network is trained on seen objects and is able to segment unseen objects. The 3D geometric elements are essential cues that imply a novel 3D object type. However, previous …
External link:
http://arxiv.org/abs/2210.09923
Author:
Zhang, Congyi, Yang, Lei, Chen, Nenglun, Vining, Nicholas, Sheffer, Alla, Lau, Francis C. M., Wang, Guoping, Wang, Wenping
Creating 3D shapes from 2D drawings is an important problem with applications in content creation for computer animation and virtual reality. We introduce a new sketch-based system, CreatureShop, that enables amateurs to create high-quality textured …
External link:
http://arxiv.org/abs/2208.05572
We propose a method for self-supervised image representation learning under the guidance of 3D geometric consistency. Our intuition is that 3D geometric consistency priors, such as smooth regions and surface discontinuities, may imply consistent …
External link:
http://arxiv.org/abs/2203.15361
Author:
Chen, Runnan, Zhu, Xinge, Chen, Nenglun, Wang, Dawei, Li, Wei, Ma, Yuexin, Yang, Ruigang, Wang, Wenping
Promising performance has been achieved for visual perception on point clouds. However, current methods typically rely on labour-intensive annotations of scene scans. In this paper, we explore how synthetic models alleviate the real scene …
External link:
http://arxiv.org/abs/2203.10546
Author:
Chen, Runnan, Zhou, Penghao, Wang, Wenzhe, Chen, Nenglun, Peng, Pai, Sun, Xing, Wang, Wenping
Personalized video highlight detection aims to shorten a long video to interesting moments according to a user's preference, and has recently attracted the community's attention. Current methods regard the user's history as holistic information to …
External link:
http://arxiv.org/abs/2109.01799
Author:
Chen, Nenglun, Pan, Xingjia, Chen, Runnan, Yang, Lei, Lin, Zhiwen, Ren, Yuqiang, Yuan, Haolei, Guo, Xiaowei, Huang, Feiyue, Wang, Wenping
We study the problem of weakly supervised grounded image captioning. That is, given an image, the goal is to automatically generate a sentence describing the context of the image with each noun word grounded to the corresponding region in the image.
External link:
http://arxiv.org/abs/2108.01056
Author:
Chen, Runnan, Ma, Yuexin, Chen, Nenglun, Liu, Lingjie, Cui, Zhiming, Lin, Yanhong, Wang, Wenping
Detecting 3D landmarks on cone-beam computed tomography (CBCT) is crucial for assessing and quantifying anatomical abnormalities in 3D cephalometric analysis. However, current methods are time-consuming and suffer from large biases in landmark …
External link:
http://arxiv.org/abs/2107.09899