Showing 1 - 10
of 1,420
for search: '"Huang, Jiaxing"'
Author:
Guo, Heng, Zhang, Jianfeng, Huang, Jiaxing, Mok, Tony C. W., Guo, Dazhou, Yan, Ke, Lu, Le, Jin, Dakai, Xu, Minfeng
The Segment Anything Model (SAM) demonstrates strong generalization ability on natural image segmentation. However, its direct adaptation to medical image segmentation tasks shows significant performance drops, with inferior accuracy and unstable results. …
External link:
http://arxiv.org/abs/2403.15063
Inspired by the success of general-purpose models in NLP, recent studies attempt to unify different vision tasks in the same sequence format and employ autoregressive Transformers for sequence prediction. They apply uni-directional attention to captu…
External link:
http://arxiv.org/abs/2403.07692
Inspired by the outstanding zero-shot capability of vision language models (VLMs) in image classification tasks, open-vocabulary object detection has attracted increasing interest by distilling the broad VLM knowledge into detector training. However, …
External link:
http://arxiv.org/abs/2402.04630
Large-vocabulary object detectors (LVDs) aim to detect objects of many categories; they learn super objectness features and can locate objects accurately when applied to various downstream data. However, LVDs often struggle in recognizing the locat…
External link:
http://arxiv.org/abs/2401.06969
Segment Anything Models (SAMs) like SEEM and SAM have demonstrated great potential in learning to segment anything. The core design of SAMs lies in promptable segmentation, which takes a handcrafted prompt as input and returns the expected segmenta…
External link:
http://arxiv.org/abs/2401.04651
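The promptable-segmentation interface described in the abstract above (a user prompt in, a binary mask out) can be illustrated with a toy sketch. The flood-fill "model" and the function name below are hypothetical stand-ins for illustration only, not SAM's actual architecture or API:

```python
# Toy sketch of a prompt -> mask interface: a point prompt selects the
# connected region of similar intensity around the clicked pixel.
# (Hypothetical illustration; real SAMs use learned image/prompt encoders.)
from collections import deque

def promptable_segment(image, point, tol=10):
    """Return a binary mask of pixels 4-connected to `point` whose
    intensity lies within `tol` of the clicked pixel's intensity."""
    h, w = len(image), len(image[0])
    y, x = point
    seed = image[y][x]
    mask = [[0] * w for _ in range(h)]
    queue = deque([(y, x)])
    mask[y][x] = 1
    while queue:
        cy, cx = queue.popleft()
        for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx] \
                    and abs(image[ny][nx] - seed) <= tol:
                mask[ny][nx] = 1
                queue.append((ny, nx))
    return mask

img = [
    [100, 100, 0],
    [100,   0, 0],
    [  0,   0, 0],
]
mask = promptable_segment(img, (0, 0))
# clicking the bright top-left pixel selects only the bright region:
# [[1, 1, 0], [1, 0, 0], [0, 0, 0]]
```

The point is the interface, not the segmenter: any prompt type (point, box, text) can be mapped to a mask through the same call shape.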
Traditional computer vision generally solves each task independently with a dedicated model whose task instruction is implicitly designed into the model architecture, giving rise to two limitations: (1) it leads to task-specific models, which require mul…
External link:
http://arxiv.org/abs/2312.16602
Author:
Huang, Jiaxing, Zheng, Jian-Hua
We first establish that any continuum without interior can be a limit set of iterations of an entire function on an oscillating wandering domain, and hence can arise as a component of a Julia set. Recently, Luka Boc Thaler showed that every bounded connected …
External link:
http://arxiv.org/abs/2309.04396
Domain generalization (DG) aims to learn domain-generalizable models from one or multiple source domains that can perform well in unseen target domains. Despite its recent progress, most existing work suffers from the misalignment between the difficu…
External link:
http://arxiv.org/abs/2309.00844
Black-box unsupervised domain adaptation (UDA) learns with source predictions of target data without accessing either source data or source models during training, and it has clear superiority in data privacy and flexibility in target network selecti…
External link:
http://arxiv.org/abs/2308.13236
Traditional domain adaptation assumes the same vocabulary across source and target domains, which often struggles with limited transfer flexibility and efficiency while handling target domains with different vocabularies. Inspired by recent vision-la…
External link:
http://arxiv.org/abs/2306.16658