Showing 1 - 10 of 2,873 for search: '"Guo, Lan"'
Vision-language models (VLMs) like CLIP have demonstrated impressive zero-shot ability in image classification tasks by aligning text and images but suffer inferior performance compared with task-specific expert models. On the contrary, expert models …
External link:
http://arxiv.org/abs/2408.11449
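The entry above describes CLIP-style zero-shot classification, which works by scoring an image against natural-language class prompts in a shared embedding space. A minimal sketch of that mechanism, using the Hugging Face `transformers` checkpoint `openai/clip-vit-base-patch32`; the class names and image path are illustrative, not taken from the paper:

```python
# Zero-shot image classification with CLIP: score one image against text prompts.
# Assumes `pip install torch transformers pillow`; labels below are hypothetical.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

class_names = ["cat", "dog", "car"]                      # hypothetical label set
prompts = [f"a photo of a {c}" for c in class_names]
image = Image.open("example.jpg")                        # any local image

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)         # image-text similarity -> class probabilities
print(dict(zip(class_names, probs[0].tolist())))
```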
In offline Imitation Learning (IL), one of the main challenges is the covariate shift between the expert observations and the actual distribution encountered by the agent, because it is difficult to determine what action an agent should take …
External link:
http://arxiv.org/abs/2406.12550
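For context on the covariate-shift problem mentioned above: plain behavior cloning only supervises the policy on states the expert visited, so errors compound once the agent drifts to unseen states. A minimal behavior-cloning sketch with synthetic placeholder data (not the linked paper's method):

```python
# Behavior cloning on expert (observation, action) pairs. The policy is only
# trained on expert-visited states, which is exactly where covariate shift
# arises once the agent's own rollouts leave that distribution.
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 2
expert_obs = torch.randn(1024, obs_dim)      # placeholder offline expert dataset
expert_act = torch.randn(1024, act_dim)

policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for _ in range(200):
    loss = nn.functional.mse_loss(policy(expert_obs), expert_act)
    opt.zero_grad()
    loss.backward()
    opt.step()
```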
Large language models (LLMs) have demonstrated impressive performance on reasoning tasks, which can be further improved through few-shot prompting techniques. However, the current evaluation primarily focuses on carefully constructed benchmarks and …
External link:
http://arxiv.org/abs/2406.05055
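Few-shot prompting, as referenced in the entry above, simply prepends a handful of worked examples before the query so the model imitates the demonstrated format. A minimal sketch; the exemplars and question are made up, and the prompt can be sent to any LLM API:

```python
# Build a few-shot prompt for a simple arithmetic-reasoning query.
few_shot_examples = [
    ("If a train travels 60 km in 1.5 hours, what is its speed?", "60 / 1.5 = 40 km/h."),
    ("A book costs 12 and a pen costs 3. Total for two books and one pen?", "2 * 12 + 3 = 27."),
]
question = "A tank fills at 5 liters per minute. How long to fill 80 liters?"

prompt = ""
for q, a in few_shot_examples:
    prompt += f"Q: {q}\nA: {a}\n\n"
prompt += f"Q: {question}\nA:"

print(prompt)  # pass `prompt` to the completion endpoint of your choice
```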
Author:
Zhou, Zhi, Shi, Jiang-Xin, Song, Peng-Xiao, Yang, Xiao-Wen, Jin, Yi-Xuan, Guo, Lan-Zhe, Li, Yu-Feng
Large language models (LLMs), including both proprietary and open-source models, have showcased remarkable capabilities in addressing a wide range of downstream tasks. Nonetheless, when it comes to practical Chinese legal tasks, these models fail to …
External link:
http://arxiv.org/abs/2406.04614
Vision-language models (VLMs), such as CLIP, have demonstrated impressive zero-shot capabilities for various downstream tasks. Their performance can be further enhanced through few-shot prompt tuning methods. However, current studies evaluate the …
External link:
http://arxiv.org/abs/2406.00345
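The entry above concerns few-shot adaptation of VLMs. As a simple stand-in (a linear probe on frozen CLIP image features rather than the prompt-tuning methods the abstract refers to), a sketch with synthetic few-shot tensors:

```python
# Few-shot adaptation of a frozen CLIP image encoder via a linear probe.
# This is NOT prompt tuning (which learns text-side context tokens); it is a
# simpler baseline shown only to illustrate few-shot adaptation on frozen features.
import torch
import torch.nn as nn
from transformers import CLIPModel

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
for p in clip.parameters():
    p.requires_grad_(False)                                   # keep the backbone frozen

num_classes, shots = 5, 4
pixel_values = torch.randn(num_classes * shots, 3, 224, 224)  # placeholder few-shot images
labels = torch.arange(num_classes).repeat_interleave(shots)

with torch.no_grad():
    feats = clip.get_image_features(pixel_values=pixel_values)  # frozen image features

head = nn.Linear(feats.shape[-1], num_classes)
opt = torch.optim.Adam(head.parameters(), lr=1e-2)
for _ in range(100):
    loss = nn.functional.cross_entropy(head(feats), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```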
Label quality issues, such as noisy labels and imbalanced class distributions, have negative effects on model performance. Automatic reweighting methods identify problematic samples with label quality issues by recognizing their negative effects on …
External link:
http://arxiv.org/abs/2312.05067
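The automatic reweighting idea referenced above is commonly realized by measuring how up-weighting each training sample would change a clean validation loss after one virtual update. A toy sketch of that gradient-based reweighting scheme (in the learning-to-reweight style, not the linked paper's own method); the data is synthetic:

```python
# Gradient-based sample reweighting: take one virtual SGD step on a weighted
# training loss, then differentiate the validation loss w.r.t. the per-sample
# weights. Samples whose weight increase would lower the validation loss get
# up-weighted; likely-noisy samples get weight zero.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n_train, n_val = 10, 64, 32
w = torch.zeros(d, requires_grad=True)                     # toy linear model
x_tr, y_tr = torch.randn(n_train, d), torch.randint(0, 2, (n_train,)).float()
x_va, y_va = torch.randn(n_val, d), torch.randint(0, 2, (n_val,)).float()

eps = torch.zeros(n_train, requires_grad=True)             # per-sample weights
per_ex = F.binary_cross_entropy_with_logits(x_tr @ w, y_tr, reduction="none")
weighted = (eps * per_ex).sum()

g = torch.autograd.grad(weighted, w, create_graph=True)[0]  # keep higher-order graph
w_hat = w - 0.1 * g                                         # one virtual update

val_loss = F.binary_cross_entropy_with_logits(x_va @ w_hat, y_va)
grad_eps = torch.autograd.grad(val_loss, eps)[0]

weights = torch.clamp(-grad_eps, min=0)                     # helpful samples only
weights = weights / (weights.sum() + 1e-12)
```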
Author:
Guo, Lan, Niu, YiFei
Our knowledge about neutron star (NS) masses is renewed once again due to the recognition of the heaviest NS, PSR J0952-0607. By taking advantage of both mass observations of supermassive neutron stars and the tidal deformability derived from …
External link:
http://arxiv.org/abs/2311.09792
Contrastive Language-Image Pre-training (CLIP) provides a foundation model by integrating natural language into visual concepts, enabling zero-shot recognition on downstream tasks. It is usually expected that satisfactory overall accuracy can be …
External link:
http://arxiv.org/abs/2310.03324
Published in:
Open Geosciences, Vol 16, Iss 1, Pp 507-26 (2024)
Multiple types of unconventional oil and gas reservoirs have been found in some faulted basins in northern China, showing good exploration potential. However, the hydrocarbon accumulation mechanism in these areas is still unclear, which limits the …
External link:
https://doaj.org/article/ee839112a7884045ba80ce8e7a1a10a1
Author:
Wang, Yidong, Chen, Hao, Fan, Yue, Sun, Wang, Tao, Ran, Hou, Wenxin, Wang, Renjie, Yang, Linyi, Zhou, Zhi, Guo, Lan-Zhe, Qi, Heli, Wu, Zhen, Li, Yu-Feng, Nakamura, Satoshi, Ye, Wei, Savvides, Marios, Raj, Bhiksha, Shinozaki, Takahiro, Schiele, Bernt, Wang, Jindong, Xie, Xing, Zhang, Yue
Semi-supervised learning (SSL) improves model generalization by leveraging massive unlabeled data to augment limited labeled samples. However, currently, popular SSL evaluation protocols are often constrained to computer vision (CV) tasks. In addition, …
External link:
http://arxiv.org/abs/2208.07204
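The entry above describes SSL's core idea of leveraging unlabeled data alongside a small labeled set. A minimal sketch of confidence-thresholded pseudo-labeling, one standard SSL baseline (not the benchmark itself); all tensors are synthetic:

```python
# Confidence-thresholded pseudo-labeling: unlabeled samples whose predicted
# class probability exceeds a threshold are treated as labeled for an extra
# unsupervised loss term.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes, dim, tau = 10, 32, 0.95
model = nn.Linear(dim, num_classes)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x_lab, y_lab = torch.randn(64, dim), torch.randint(0, num_classes, (64,))
x_unl = torch.randn(256, dim)

for _ in range(100):
    sup_loss = F.cross_entropy(model(x_lab), y_lab)

    with torch.no_grad():
        probs = model(x_unl).softmax(dim=-1)
        conf, pseudo = probs.max(dim=-1)
        mask = conf >= tau                                   # keep confident pseudo-labels only

    unsup_loss = (
        F.cross_entropy(model(x_unl)[mask], pseudo[mask]) if mask.any() else torch.zeros(())
    )
    loss = sup_loss + unsup_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```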