Showing 1 - 10 of 201
for search: '"Chen Yangyi"'
Published in:
e-Polymers, Vol 24, Iss 1, Pp 8312-9 (2024)
The structure and transition behavior of a crosslinked thermo-responsive poly(2-(2-methoxyethoxy)ethyl methacrylate-co-(ethylene glycol) methacrylate) (P(MEO2MA-co-EGMA360)) gel film on a flat cellulosic substrate were investigated. The regenerate…
External link:
https://doaj.org/article/5b4f78d5172045a0aae5e8a58d8e7ed8
Published in:
International Journal of Aerospace Engineering, Vol 2024 (2024)
Compressive sampling matching pursuit (CoSaMP), a conventional algorithm that requires known system sparsity and is sensitive to step size, is improved in this paper by approximating the sparsity with an adaptive variable step size. In the proposed algorithm (Co…
External link:
https://doaj.org/article/56a8b748c32e4ea681316c6ba8fb5a95
Precise estimation of downstream performance in large language models (LLMs) prior to training is essential for guiding their development process. Scaling laws analysis utilizes the statistics of a series of significantly smaller sampling language mo…
External link:
http://arxiv.org/abs/2410.08527
We present SOLO, a single transformer for Scalable visiOn-Language mOdeling. Current large vision-language models (LVLMs) such as LLaVA mostly employ heterogeneous architectures that connect pre-trained visual encoders with large language models (LLM…
External link:
http://arxiv.org/abs/2407.06438
Large language models (LLMs) often generate inaccurate or fabricated information and generally fail to indicate their confidence, which limits their broader applications. Previous work elicits confidence from LLMs by direct or self-consistency prompt…
External link:
http://arxiv.org/abs/2405.20974
Large Language Model (LLM) agents, capable of performing a broad range of actions, such as invoking tools and controlling robots, show great potential in tackling real-world challenges. LLM agents are typically prompted to produce actions by generati…
External link:
http://arxiv.org/abs/2402.01030
State-of-the-art vision-language models (VLMs) still have limited performance in structural knowledge extraction, such as relations between objects. In this work, we present ViStruct, a training framework to learn VLMs for effective visual structural…
External link:
http://arxiv.org/abs/2311.13258
Can large language models (LLMs) express their uncertainty in situations where they lack sufficient parametric knowledge to generate reasonable responses? This work aims to systematically investigate LLMs' behaviors in such situations, emphasizing th…
External link:
http://arxiv.org/abs/2311.09731
Author:
Zhang, Hanning, Diao, Shizhe, Lin, Yong, Fung, Yi R., Lian, Qing, Wang, Xingyao, Chen, Yangyi, Ji, Heng, Zhang, Tong
Large language models (LLMs) have revolutionized numerous domains with their impressive performance but still face challenges. A predominant issue is the propensity of these models to generate non-existent facts, a concern termed hallucination…
External link:
http://arxiv.org/abs/2311.09677
We present DRESS, a large vision language model (LVLM) that innovatively exploits Natural Language Feedback (NLF) from Large Language Models to enhance its alignment and interactions by addressing two key limitations in the state-of-the-art LVLMs. Fi…
External link:
http://arxiv.org/abs/2311.10081