Showing 1 - 10 of 106 for the search: '"Huang Haizhen"'
Published in:
Case Studies in Construction Materials, Vol 20, p. e03270 (2024)
To forecast chloride transport on site by applying accelerated test results, it is necessary to investigate the similarity of chloride diffusivity under different environments. However, limited by test duration and cost, a large amount of test
External link:
https://doaj.org/article/fea36c4d774e45e498e31228d2418377
Author:
Jiang, Ting, Song, Minghui, Zhang, Zihan, Huang, Haizhen, Deng, Weiwei, Sun, Feng, Zhang, Qi, Wang, Deqing, Zhuang, Fuzhen
Multimodal large language models (MLLMs) have shown promising advancements in general visual and language understanding. However, the representation of multimodal information using MLLMs remains largely unexplored. In this work, we introduce a new fr
External link:
http://arxiv.org/abs/2407.12580
Published in:
E3S Web of Conferences, Vol 409, p 02002 (2023)
With the rapid development of online retail, the drawback of product fit uncertainty in online markets is becoming increasingly prominent. In order to alleviate the impact of product fit uncertainty, online retailers continue to introduce new s
External link:
https://doaj.org/article/4d39a29060f643ffae44a8352573faac
Author:
Liu, Yuxuan, Yang, Tianchi, Zhang, Zihan, Song, Minghui, Huang, Haizhen, Deng, Weiwei, Sun, Feng, Zhang, Qi
Generative retrieval, a promising new paradigm in information retrieval, employs a seq2seq model to encode document features into parameters and decode relevant document identifiers (IDs) based on search queries. Existing generative retrieval solutio
External link:
http://arxiv.org/abs/2405.14280
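The seq2seq decoding over document identifiers that this abstract describes can be sketched as a toy constrained-decoding loop. Everything here is an illustrative assumption rather than the paper's implementation: a real system uses a trained seq2seq model, whereas this sketch stands in a hand-written affinity table, and document IDs are short digit strings.

```python
# Toy sketch of generative retrieval: decode a document ID token by token,
# constrained to prefixes of valid IDs. The DOC_IDS corpus and AFFINITY
# "model" are illustrative assumptions, not the paper's method.
DOC_IDS = {
    "101": "low-rank adaptation paper",
    "102": "generative retrieval paper",
    "205": "diffusion paper",
}

# Stand-in for decoder logits: per-query affinity for each full document ID.
AFFINITY = {
    "low-rank adaptation": {"101": 0.9, "102": 0.2, "205": 0.1},
    "generative retrieval": {"101": 0.1, "102": 0.8, "205": 0.2},
}

def decode_id(query):
    # Greedy constrained decoding: at each step, extend the prefix only with
    # characters that keep it a prefix of some valid document ID, choosing
    # the character whose best reachable ID has the highest affinity.
    prefix = ""
    while prefix not in DOC_IDS:
        candidates = [d for d in DOC_IDS if d.startswith(prefix) and len(d) > len(prefix)]
        next_chars = {d[len(prefix)] for d in candidates}
        best_char = max(
            next_chars,
            key=lambda ch: max(
                AFFINITY[query][d] for d in candidates if d[len(prefix)] == ch
            ),
        )
        prefix += best_char
    return prefix

assert decode_id("generative retrieval") == "102"
assert decode_id("low-rank adaptation") == "101"
```

The prefix constraint is what makes decoding always terminate in a valid identifier; production systems implement the same idea with a trie over corpus IDs masking the decoder's vocabulary at each step.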
Author:
Jiang, Ting, Huang, Shaohan, Luo, Shengyue, Zhang, Zihan, Huang, Haizhen, Wei, Furu, Deng, Weiwei, Sun, Feng, Zhang, Qi, Wang, Deqing, Zhuang, Fuzhen
Low-rank adaptation is a popular parameter-efficient fine-tuning method for large language models. In this paper, we analyze the impact of low-rank updating, as implemented in LoRA. Our findings suggest that the low-rank updating mechanism may limit
External link:
http://arxiv.org/abs/2405.12130
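The low-rank updating mechanism analyzed in this abstract can be illustrated with a minimal NumPy sketch. The shapes and zero-initialization of the up-projection follow the common LoRA convention; this is a demonstration of the mechanism, not the paper's code.

```python
import numpy as np

# Minimal sketch of LoRA-style low-rank updating: a frozen weight W is
# adapted as W + B @ A, where A (r x d_in) and B (d_out x r) are the only
# trainable matrices, so the update B @ A has rank at most r.
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # Effective weight is W + B @ A; only A and B receive gradient updates.
    return x @ (W + B @ A).T

x = rng.standard_normal((1, d_in))
# With B initialized to zero, the adapted layer matches the frozen one.
assert np.allclose(lora_forward(x), x @ W.T)
# The update itself never exceeds rank r -- the limitation the paper analyzes.
assert np.linalg.matrix_rank(B @ A) <= r
```

The rank bound in the last assertion is exactly the constraint the abstract refers to: whatever A and B learn, the weight change can never exceed rank r.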
Author:
Shi, Shuhua, Huang, Shaohan, Song, Minghui, Li, Zhoujun, Zhang, Zihan, Huang, Haizhen, Wei, Furu, Deng, Weiwei, Sun, Feng, Zhang, Qi
As one of the most popular parameter-efficient fine-tuning (PEFT) methods, low-rank adaptation (LoRA) is commonly applied to fine-tune large language models (LLMs). However, updating the weights of LoRA blocks effectively and expeditiously is challen
External link:
http://arxiv.org/abs/2402.18039
Author:
Liu, Yuxuan, Yang, Tianchi, Huang, Shaohan, Zhang, Zihan, Huang, Haizhen, Wei, Furu, Deng, Weiwei, Sun, Feng, Zhang, Qi
Large language models (LLMs) have emerged as a promising alternative to expensive human evaluations. However, the alignment and coverage of LLM-based evaluations are often limited by the scope and potential bias of the evaluation prompts and criteria
External link:
http://arxiv.org/abs/2402.15754
Author:
Liu, Yuxuan, Yang, Tianchi, Huang, Shaohan, Zhang, Zihan, Huang, Haizhen, Wei, Furu, Deng, Weiwei, Sun, Feng, Zhang, Qi
Diffusion models have demonstrated exceptional capability in generating high-quality images, videos, and audio. Due to their adaptiveness in iterative refinement, they provide a strong potential for achieving better non-autoregressive sequence genera
External link:
http://arxiv.org/abs/2402.14843
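The iterative refinement this abstract credits to diffusion models can be illustrated with a toy denoising loop. The target vector and the "perfect" denoiser are assumptions for demonstration only; a real diffusion model replaces them with a learned network conditioned on the timestep.

```python
import numpy as np

# Illustrative sketch of iterative refinement in the spirit of diffusion
# models: start from noise and repeatedly move the estimate a fraction of
# the way toward a denoiser's prediction of the clean sequence.
rng = np.random.default_rng(42)
target = np.array([1.0, -2.0, 0.5, 3.0])    # the "clean" sequence (assumed)

def denoise_step(x, step_size=0.5):
    # Toy denoiser that predicts the target exactly; real models
    # approximate this prediction with a trained network.
    prediction = target
    return x + step_size * (prediction - x)

x = rng.standard_normal(4)                  # pure-noise initialization
for _ in range(20):
    x = denoise_step(x)

# Each step shrinks the error by the step size, so after enough
# refinement iterations the estimate converges to the target.
assert np.allclose(x, target, atol=1e-3)
```

Because every position of `x` is updated simultaneously at each step, the loop is non-autoregressive, which is the property the abstract highlights for sequence generation.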
Author:
Wang, Zhaoyang, Huang, Shaohan, Liu, Yuxuan, Wang, Jiahai, Song, Minghui, Zhang, Zihan, Huang, Haizhen, Wei, Furu, Deng, Weiwei, Sun, Feng, Zhang, Qi
Large language models (LLMs) exhibit impressive emergent abilities in natural language processing, but their democratization is hindered due to huge computation requirements and closed-source nature. Recent research on advancing open-source smaller L
External link:
http://arxiv.org/abs/2310.13332
Author:
Yang, Tianchi, Song, Minghui, Zhang, Zihan, Huang, Haizhen, Deng, Weiwei, Sun, Feng, Zhang, Qi
Generative retrieval, a new advanced paradigm for document retrieval, has recently attracted research interest, since it encodes all documents into the model and directly generates the retrieved documents. However, its power is still underu
External link:
http://arxiv.org/abs/2310.12455