Showing 1 - 10 of 161
for search: '"Sun Jianling"'
Selecting the best code solution from multiple generated ones is an essential task in code generation, which can be achieved by using some reliable validators (e.g., developer-written test cases) for assistance. Since reliable test cases are not always…
External link:
http://arxiv.org/abs/2409.08692
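To make the idea concrete, here is a minimal sketch of validator-based selection, not the paper's method: run every candidate against the available test cases and keep the one that passes the most. The candidates and tests below are hypothetical.

```python
def run_tests(solution_fn, tests):
    """Count how many (args, expected) pairs the candidate satisfies."""
    passed = 0
    for args, expected in tests:
        try:
            if solution_fn(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing candidate simply fails that test
    return passed

# Two hypothetical candidates for "absolute difference".
candidates = [
    lambda a, b: a - b,        # buggy: sign depends on argument order
    lambda a, b: abs(a - b),   # correct
]
tests = [((3, 5), 2), ((5, 3), 2), ((0, 0), 0)]

best = max(candidates, key=lambda fn: run_tests(fn, tests))
print(best(3, 5))  # -> 2
```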
Foundation models have emerged as a promising approach in time series forecasting (TSF). Existing approaches either repurpose large language models (LLMs) or build large-scale time series datasets to develop TSF foundation models for universal forecasting…
External link:
http://arxiv.org/abs/2408.17253
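As a point of reference for the forecasting task itself, not the paper's foundation-model approach, a minimal sliding-window baseline can be sketched as follows; the data and lookback length are illustrative.

```python
import numpy as np

def make_windows(series, lookback):
    """Turn a 1-D series into (past-window, next-value) training pairs."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)

X, y = make_windows(series, lookback=16)
# Least-squares linear forecaster as a stand-in for a learned model.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
forecast = series[-16:] @ coef  # one-step-ahead forecast beyond the series
print(f"one-step-ahead forecast: {forecast:.3f}")
```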
Utilizing code snippets on Stack Overflow (SO) is a common practice among developers for problem-solving. Although SO code snippets serve as valuable resources, it is important to acknowledge their imperfections; reusing problematic code snippets can…
External link:
http://arxiv.org/abs/2408.09095
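One illustrative sanity check before reusing a snippet, sketched here with Python's standard ast module and not drawn from the paper, is to flag names the snippet uses but never defines.

```python
import ast
import builtins

# Hypothetical SO-style snippet: it references 'data' without defining it.
snippet = """
result = [x * 2 for x in data]
print(result)
"""

tree = ast.parse(snippet)
defined = {n.id for n in ast.walk(tree)
           if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
used = {n.id for n in ast.walk(tree)
        if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
missing = used - defined - set(dir(builtins))
print("undefined names:", missing)  # -> {'data'}
```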
Author:
Peng, Xinyu; Han, Feng; Peng, Li; Liu, Weiran; Yan, Zheng; Kang, Kai; Zhang, Xinyuan; Wei, Guoxing; Sun, Jianling; Liu, Jinfei
This paper introduces MapComp, a novel view-based framework to facilitate join-group-aggregation (JGA) queries for collaborative analytics. Through a specially crafted materialized view for join and a novel design of group-aggregation (GA) protocols, MapComp…
External link:
http://arxiv.org/abs/2408.01246
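For readers unfamiliar with the query class, the sketch below shows what a plaintext join-group-aggregation (JGA) query looks like in SQL; MapComp's contribution is executing such queries securely across parties, which this sketch does not attempt. The tables and values are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders(user_id INT, amount REAL);
    CREATE TABLE users(user_id INT, region TEXT);
    INSERT INTO orders VALUES (1, 10.0), (1, 5.0), (2, 7.5);
    INSERT INTO users  VALUES (1, 'EU'), (2, 'US');
""")

# Join the two tables, group by region, aggregate the amounts.
for row in con.execute("""
    SELECT u.region, SUM(o.amount)
    FROM orders o JOIN users u ON o.user_id = u.user_id
    GROUP BY u.region
"""):
    print(row)  # ('EU', 15.0) then ('US', 7.5)
```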
Software vulnerabilities pose significant risks to the security and integrity of software systems. Prior studies have proposed various approaches to vulnerability detection using deep learning or pre-trained models. However, there is still a lack of…
External link:
http://arxiv.org/abs/2406.09701
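As a toy stand-in for the learned detectors the snippet refers to, the sketch below trains a bag-of-tokens classifier on a few hypothetical labeled C fragments; the data, labels, and model choice are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

code = [
    "strcpy(buf, user_input);",                 # unbounded copy
    "gets(line);",                              # unbounded read
    "strncpy(buf, user_input, sizeof(buf));",   # bounded copy
    "fgets(line, sizeof(line), stdin);",        # bounded read
]
labels = [1, 1, 0, 0]  # 1 = vulnerable

vec = CountVectorizer(token_pattern=r"\w+")
X = vec.fit_transform(code)
clf = LogisticRegression().fit(X, labels)
print(clf.predict(vec.transform(["strcpy(dst, src);"])))  # likely [1]
```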
Inspired by the great potential of Large Language Models (LLMs) for solving complex coding tasks, in this paper, we propose a novel approach, named Code2API, to automatically perform APIzation for Stack Overflow code snippets. Code2API does not require…
External link:
http://arxiv.org/abs/2405.03509
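To illustrate what "APIzation" means in practice, the sketch below hand-converts a snippet-style fragment into a reusable function; Code2API's point is automating this step with an LLM, and the example code is invented.

```python
# Snippet style: hard-coded input, prints instead of returning.
#   nums = [3, 1, 2]
#   nums.sort()
#   print(nums[0], nums[-1])

# APIzed version: inputs become parameters, outputs become return values.
def min_max(nums):
    """Return the smallest and largest element of a list."""
    ordered = sorted(nums)
    return ordered[0], ordered[-1]

print(min_max([3, 1, 2]))  # -> (1, 3)
```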
PEMT: Multi-Task Correlation Guided Mixture-of-Experts Enables Parameter-Efficient Transfer Learning
Parameter-efficient fine-tuning (PEFT) has emerged as an effective method for adapting pre-trained language models to various tasks efficiently. Recently, there has been a growing interest in transferring knowledge from one or multiple tasks to the…
External link:
http://arxiv.org/abs/2402.15082
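The sketch below shows the two ingredients the abstract combines: a bottleneck adapter (the PEFT unit) and a softmax gate that mixes several adapters as experts. Module names and shapes are illustrative, not PEMT's actual architecture.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck adapter: down-project, nonlinearity, up-project."""
    def __init__(self, dim, bottleneck=16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

class AdapterMoE(nn.Module):
    """Mixture of adapter experts weighted by a learned token-wise gate."""
    def __init__(self, dim, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(Adapter(dim) for _ in range(n_experts))
        self.gate = nn.Linear(dim, n_experts)

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)          # (..., E)
        outs = torch.stack([e(x) for e in self.experts], -1)   # (..., D, E)
        return (outs * weights.unsqueeze(-2)).sum(-1)

x = torch.randn(2, 8, 64)        # (batch, seq, hidden)
print(AdapterMoE(64)(x).shape)   # torch.Size([2, 8, 64])
```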
The integration of Large Language Models (LLMs) with Graph Representation Learning (GRL) marks a significant evolution in analyzing complex data structures. This collaboration harnesses the sophisticated linguistic capabilities of LLMs to improve the…
External link:
http://arxiv.org/abs/2402.05952
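A minimal sketch of the integration pattern, with a toy stand-in for the LLM text encoder, is to derive node features from each node's text and then propagate them over the graph; the embed() stub and the tiny graph are assumptions of this note.

```python
import hashlib
import numpy as np

def embed(text, dim=8):
    """Stand-in for an LLM text encoder (stable toy hash embedding)."""
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "little")
    return np.random.default_rng(seed).standard_normal(dim)

texts = ["paper on GNNs", "paper on LLMs", "survey of both"]
A = np.array([[0, 0, 1],
              [0, 0, 1],
              [1, 1, 0]], dtype=float)   # undirected citation graph

H = np.stack([embed(t) for t in texts])  # node features from text
deg = A.sum(1, keepdims=True)
H_next = (A @ H) / np.maximum(deg, 1)    # mean-neighbor aggregation
print(H_next.shape)                      # (3, 8)
```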
While existing code large language models (code LLMs) exhibit impressive capabilities in code generation, their autoregressive sequential generation inherently lacks reversibility. This limitation hinders them from timely correcting previous missing…
External link:
http://arxiv.org/abs/2401.07870
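The limitation can be illustrated with two toy decoding loops: an append-only autoregressive loop that can never revise an emitted token, and a refinement loop that may rewrite any position. Both "models" below are trivial stand-ins, not the paper's decoder.

```python
def autoregressive(step, n):
    """Append-only decoding: once emitted, a token is never revised."""
    seq = []
    for _ in range(n):
        seq.append(step(seq))
    return seq

def iterative_refine(edit, seq, rounds=2):
    """Refinement-style decoding: each pass may rewrite any position."""
    for _ in range(rounds):
        seq = edit(seq)
    return seq

# Toy "models": emit the current length / rewrite to the index sequence.
print(autoregressive(lambda prefix: len(prefix), 4))                  # [0, 1, 2, 3]
print(iterative_refine(lambda s: list(range(len(s))), [7, 7, 7, 7]))  # [0, 1, 2, 3]
```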
Recent research has demonstrated the efficacy of pre-training graph neural networks (GNNs) to capture the transferable graph semantics and enhance the performance of various downstream tasks. However, the semantic knowledge learned from pretext tasks…
External link:
http://arxiv.org/abs/2310.14845
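One common pretext task, sketched below with an illustrative mean-aggregation encoder rather than the paper's model, is masking node features and training the encoder to reconstruct them from neighborhood information.

```python
import torch
import torch.nn as nn

A = torch.tensor([[0., 1., 1.],
                  [1., 0., 0.],
                  [1., 0., 0.]])   # toy 3-node adjacency matrix
X = torch.randn(3, 8)              # node features to reconstruct

encoder = nn.Linear(8, 8)          # stand-in for a GNN encoder layer
opt = torch.optim.Adam(encoder.parameters(), lr=1e-2)

for step in range(200):
    mask = (torch.rand(3, 1) < 0.3).float()           # mask ~30% of nodes
    X_in = X * (1 - mask)                             # zero masked features
    H = (A @ encoder(X_in)) / A.sum(1, keepdim=True)  # neighbor-mean pass
    # Reconstruction loss over masked nodes only (0 if nothing was masked).
    loss = (((H - X) ** 2) * mask).sum() / (mask.sum() * X.shape[1]).clamp(min=1)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"reconstruction loss after pre-training: {loss.item():.4f}")
```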