Showing 1 - 10 of 552 for search: '"LI, Yongqi"'
Speculative decoding (SD) has emerged as a widely used paradigm to accelerate the inference of large language models (LLMs) without compromising generation quality. It works by first employing a compact model to draft multiple tokens efficiently and …
External link:
http://arxiv.org/abs/2410.06916
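The snippet above describes the draft-then-verify loop of speculative decoding: a cheap draft model proposes several tokens, and the target model keeps the longest verified prefix. A minimal toy sketch of one such step, using made-up deterministic stand-ins for both models (none of this is the paper's actual implementation):

```python
# Toy sketch of one speculative-decoding step (hypothetical models, not the
# paper's implementation). The cheap draft model proposes k tokens; the
# target model checks them, keeps the longest agreeing prefix, and
# contributes one corrected token at the first mismatch.

def draft_model(prefix, k):
    # Hypothetical drafter: right for the first two positions, then wrong.
    last = prefix[-1]
    return [(last + i + 1) % 10 if i < 2 else 0 for i in range(k)]

def target_model(prefix):
    # Hypothetical target: the "ground truth" next-token rule.
    return (prefix[-1] + 1) % 10

def speculative_step(prefix, k=4):
    context = list(prefix)
    accepted = []
    for tok in draft_model(prefix, k):
        expected = target_model(context)
        if tok == expected:            # draft token verified: accept it
            accepted.append(tok)
            context.append(tok)
        else:                          # first mismatch: take the target's
            accepted.append(expected)  # token instead and stop
            context.append(expected)
            break
    return accepted

print(speculative_step([3]))  # → [4, 5, 6]: two drafts accepted + one correction
```

Because every emitted token is either verified or produced by the target model itself, the output distribution matches plain autoregressive decoding, which is why the approach is lossless.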
Author:
Lin, Xinyu, Yang, Chaoqun, Wang, Wenjie, Li, Yongqi, Du, Cunxiao, Feng, Fuli, Ng, See-Kiong, Chua, Tat-Seng
Large Language Model (LLM)-based generative recommendation has achieved notable success, yet its practical deployment is costly, particularly due to excessive inference latency caused by autoregressive decoding. For lossless LLM decoding acceleration, …
External link:
http://arxiv.org/abs/2410.05165
Author:
Li, Yongqi, Cai, Hongru, Wang, Wenjie, Qu, Leigang, Wei, Yinwei, Li, Wenjie, Nie, Liqiang, Chua, Tat-Seng
Text-to-image retrieval is a fundamental task in multimedia processing, aiming to retrieve semantically relevant cross-modal content. Traditional studies have typically approached this task as a discriminative problem, matching the text and image via …
External link:
http://arxiv.org/abs/2407.17274
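The discriminative matching that the snippet above contrasts against can be sketched in a few lines: embed the text query and every candidate image into a shared space, then rank by similarity. A toy illustration with made-up 2-D embeddings (no real encoder is involved):

```python
import math

# Minimal sketch of discriminative text-to-image retrieval: score each image
# embedding against the text-query embedding by cosine similarity and rank.
# The embeddings below are made-up toy vectors, not from any real encoder.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(query_emb, image_embs):
    # Return image ids sorted by descending similarity to the query.
    scores = {img_id: cosine(query_emb, emb) for img_id, emb in image_embs.items()}
    return sorted(scores, key=scores.get, reverse=True)

images = {"cat.jpg": [0.9, 0.1], "dog.jpg": [0.1, 0.9], "car.jpg": [-0.5, 0.4]}
print(retrieve([1.0, 0.0], images))  # → ['cat.jpg', 'dog.jpg', 'car.jpg']
```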
Tool learning, which has gained significant attention recently, aims to enhance and expand the capabilities of large language models (LLMs) with external tools. Current methods have shown that LLMs can effectively handle a certain number of tools through in…
External link:
http://arxiv.org/abs/2406.17465
How humans can efficiently and effectively acquire images is a perennial question. A typical solution is text-to-image retrieval from an existing database given the text query; however, the limited database typically lacks creativity. By …
External link:
http://arxiv.org/abs/2406.05814
Author:
Wang, Wenjie, Bao, Honghui, Lin, Xinyu, Zhang, Jizhi, Li, Yongqi, Feng, Fuli, Ng, See-Kiong, Chua, Tat-Seng
Utilizing powerful Large Language Models (LLMs) for generative recommendation has attracted much attention. Nevertheless, a crucial challenge is transforming recommendation data into the language space of LLMs through effective item tokenization. …
External link:
http://arxiv.org/abs/2405.07314
Author:
Li, Yongqi, Lin, Xinyu, Wang, Wenjie, Feng, Fuli, Pang, Liang, Li, Wenjie, Nie, Liqiang, He, Xiangnan, Chua, Tat-Seng
With the information explosion on the Web, search and recommendation are foundational infrastructures for satisfying users' information needs. As two sides of the same coin, both revolve around the same core research problem: matching queries with …
External link:
http://arxiv.org/abs/2404.16924
Training deep neural networks is a challenging task. In order to speed up training and enhance the performance of deep neural networks, we rectify the vanilla conjugate gradient as conjugate-gradient-like and incorporate it into the generic Adam, and …
External link:
http://arxiv.org/abs/2404.01714
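The idea sketched in the abstract above, plugging a conjugate-gradient-like search direction into an Adam-style adaptive update, can be illustrated generically. This is a hedged sketch only: the direction formula, coefficients, and the `cg_like_adam_step` helper are all illustrative assumptions, not the paper's actual algorithm.

```python
import math

# Generic sketch: an Adam-style update where the raw gradient is replaced by
# a conjugate-gradient-like direction d_t = g_t + beta_cg * d_{t-1}.
# Illustrative only; coefficients and details are not taken from the paper.

def cg_like_adam_step(theta, grad, state, lr=0.01,
                      beta1=0.9, beta2=0.999, eps=1e-8, beta_cg=0.5):
    d = grad + beta_cg * state.get("d", 0.0)   # conjugate-gradient-like direction
    t = state.get("t", 0) + 1
    m = beta1 * state.get("m", 0.0) + (1 - beta1) * d      # first moment
    v = beta2 * state.get("v", 0.0) + (1 - beta2) * d * d  # second moment
    m_hat = m / (1 - beta1 ** t)               # Adam bias correction
    v_hat = v / (1 - beta2 ** t)
    state.update(d=d, t=t, m=m, v=v)
    return theta - lr * m_hat / (math.sqrt(v_hat) + eps)

# Minimize f(x) = x^2 (gradient 2x) starting from x = 1.0.
x, state = 1.0, {}
for _ in range(500):
    x = cg_like_adam_step(x, 2 * x, state)
print(abs(x) < 0.1)  # the iterate settles near the minimum at 0
```

The momentum-like recurrence on `d` reuses the previous search direction, while Adam's moment estimates and bias correction keep the per-step magnitude adaptive.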
Despite advancements in text-to-image generation (T2I), prior methods often face text-image misalignment problems such as relation confusion in generated images. Existing solutions involve cross-attention manipulation for better compositional understanding …
External link:
http://arxiv.org/abs/2403.04321
The recent advancements in generative language models have demonstrated their ability to memorize knowledge from documents and recall that knowledge to respond to user queries effectively. Building upon this capability, we propose to enable multimodal large …
External link:
http://arxiv.org/abs/2402.10805