Showing 1 - 10
of 6,309
for search: '"Xu,Yao"'
Constantly discovering novel concepts is crucial in evolving environments. This paper explores the underexplored task of Continual Generalized Category Discovery (C-GCD), which aims to incrementally discover new classes from unlabeled data while main…
External link:
http://arxiv.org/abs/2410.06535
Large Multimodal Models (LMMs) exhibit remarkable multi-tasking ability by learning mixed datasets jointly. However, novel tasks are encountered sequentially in a dynamic world, and continually fine-tuning LMMs often leads to performance degradation…
External link:
http://arxiv.org/abs/2410.05849
In this paper, we propose Neural-Symbolic Collaborative Distillation (NesyCD), a novel knowledge distillation method for learning the complex reasoning abilities of Large Language Models (LLMs, e…
External link:
http://arxiv.org/abs/2409.13203
Enhancing Outlier Knowledge for Few-Shot Out-of-Distribution Detection with Extensible Local Prompts
Out-of-Distribution (OOD) detection, aiming to distinguish outliers from known categories, has gained prominence in practical scenarios. Recently, the advent of vision-language models (VLM) has heightened interest in enhancing OOD detection for VLM t…
External link:
http://arxiv.org/abs/2409.04796
With the advancement of large-scale language modeling techniques, large multimodal models combining visual encoders with large language models have demonstrated exceptional performance in various visual tasks. Most of the current large-scale multimod…
External link:
http://arxiv.org/abs/2409.01179
Adapting pre-trained models to open classes is a challenging problem in machine learning. Vision-language models fully explore the knowledge of text modality, demonstrating strong zero-shot recognition performance, which is naturally suited for vario…
External link:
http://arxiv.org/abs/2408.16486
In real-world applications, the sample distribution at the inference stage often differs from the one at the training stage, causing performance degradation of trained deep models. The research on domain generalization (DG) aims to develop robust alg…
External link:
http://arxiv.org/abs/2408.09138
Authors:
Xu, Yao, Cooperman, Gene
MPI is the de facto standard for parallel computing on a cluster of computers. Checkpointing is an important component in any strategy for software resilience and for long-running jobs that must be executed by chaining together time-bounded resource…
External link:
http://arxiv.org/abs/2408.02218
Class-incremental learning (CIL) aims to recognize new classes incrementally while maintaining the discriminability of old classes. Most existing CIL methods are exemplar-based, i.e., storing a part of old data for retraining. Without relearning old…
External link:
http://arxiv.org/abs/2407.14029
Authors:
Liao, Huanxuan, Xu, Yao, He, Shizhu, Zhang, Yuanzhe, Hao, Yanchao, Liu, Shengping, Liu, Kang, Zhao, Jun
Large language models (LLMs) have acquired the ability to solve general tasks by utilizing instruction finetuning (IFT). However, IFT still relies heavily on instance training of extensive task data, which greatly limits the adaptability of LLMs to r…
External link:
http://arxiv.org/abs/2406.12382