Showing 1 - 10 of 1,464 for search: '"Lim, Ee"'
The recent success of large language models (LLMs) has attracted widespread interest in developing role-playing conversational agents personalized to the characteristics and styles of different speakers to enhance their abilities to perform both general…
External link:
http://arxiv.org/abs/2405.10150
In the realm of food computing, segmenting ingredients from images poses substantial challenges due to the large intra-class variance among the same ingredients, the emergence of new ingredients, and the high annotation costs associated with large fo…
External link:
http://arxiv.org/abs/2404.01409
Author:
Wang, Lei, Lim, Ee-Peng
Large language models (LLMs) have shown excellent performance on various NLP tasks. To use LLMs as strong sequential recommenders, we explore the in-context learning approach to sequential recommendation. We investigate the effects of instruction for…
External link:
http://arxiv.org/abs/2403.10135
Author:
Wang, Lei, Xu, Wanyu, Hu, Zhiqiang, Lan, Yihuai, Dong, Shan, Wang, Hao, Lee, Roy Ka-Wei, Lim, Ee-Peng
This paper introduces a new in-context learning (ICL) mechanism called In-Image Learning (I$^2$L) that combines demonstration examples, visual cues, and chain-of-thought reasoning into an aggregated image to enhance the capabilities of Large Multimod…
External link:
http://arxiv.org/abs/2402.17971
This work considers a practical semi-supervised graph anomaly detection (GAD) scenario, where part of the nodes in a graph are known to be normal, in contrast to the extensively explored unsupervised setting with a fully unlabeled graph. We reveal th…
External link:
http://arxiv.org/abs/2402.11887
Large language models (LLMs) have shown remarkable performance on natural language processing (NLP) tasks. To comprehend and execute diverse human instructions over image data, instruction-tuned large vision-language models (LVLMs) have been introduc…
External link:
http://arxiv.org/abs/2312.01701
This paper examines the capacity of LLMs to reason with knowledge graphs using their internal knowledge graph, i.e., the knowledge graph they learned during pre-training. Two research questions are formulated to investigate the accuracy of LLMs in re…
External link:
http://arxiv.org/abs/2312.00353
Author:
Lan, Yihuai, Hu, Zhiqiang, Wang, Lei, Wang, Yang, Ye, Deheng, Zhao, Peilin, Lim, Ee-Peng, Xiong, Hui, Wang, Hao
This paper investigates the open research problem of uncovering the social behaviors of LLM-based agents. To achieve this goal, we adopt Avalon, a representative communication game, as the environment and use system prompts to guide LLM agents…
External link:
http://arxiv.org/abs/2310.14985
Data visualization is a powerful tool for exploring and communicating insights in various domains. To automate visualization choice for datasets, a task known as visualization recommendation has been proposed. Various machine-learning-based approache…
External link:
http://arxiv.org/abs/2310.07652
Large language models (LLMs) have recently been shown to deliver impressive performance on various NLP tasks. To tackle multi-step reasoning tasks, few-shot chain-of-thought (CoT) prompting includes a few manually crafted step-by-step reasoning demon…
External link:
http://arxiv.org/abs/2305.04091