Showing 1 - 10 of 843 for search: '"RAMANATHAN, MURALI"'
Author:
Kuhar, Sachit; Ahmad, Wasi Uddin; Wang, Zijian; Jain, Nihal; Qian, Haifeng; Ray, Baishakhi; Ramanathan, Murali Krishna; Ma, Xiaofei; Deoras, Anoop
Recent advancements in code completion models have primarily focused on local file contexts. However, these studies do not fully capture the complexity of real-world software development, which often requires the use of rapidly-evolving public libraries… (an illustrative sketch follows the link below)
External link:
http://arxiv.org/abs/2412.04478
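The entry above concerns completion against rapidly evolving public libraries. As a loose, hypothetical illustration (not the paper's method), the following Python sketch prepends the locally installed version of a library to a completion prompt so a model could condition on it; the library name numpy and the prompt format are assumptions.

# Hypothetical sketch: prepend installed-library version info to a code-completion
# prompt so a model can condition on the library release it must target.
from importlib import metadata

def versioned_prompt(prefix: str, libraries: list[str]) -> str:
    """Prefix a completion prompt with the installed versions of the given libraries."""
    header = []
    for lib in libraries:
        try:
            header.append(f"# {lib}=={metadata.version(lib)}")
        except metadata.PackageNotFoundError:
            header.append(f"# {lib} (version unknown)")
    return "\n".join(header) + "\n" + prefix

if __name__ == "__main__":
    prefix = "import numpy as np\narr = np.array([1, 2, 3])\nmean = "
    print(versioned_prompt(prefix, ["numpy"]))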
In this study, we address the issue of API hallucinations in various software engineering contexts. We introduce CloudAPIBench, a new benchmark designed to measure API hallucination occurrences. CloudAPIBench also provides annotations for frequencies… (an illustrative sketch follows the link below)
External link:
http://arxiv.org/abs/2407.09726
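Here "API hallucination" means generated code calling an API that does not exist in the target SDK. As a rough, hypothetical illustration of how such occurrences could be counted (not the CloudAPIBench implementation), the sketch below compares attribute calls found in a generated Python snippet against an assumed whitelist of valid API names; the whitelist and the sample snippet are made up.

# Hypothetical sketch: count hallucinated API calls by comparing attribute calls in
# generated code against a known set of valid API names. Not the CloudAPIBench code.
import ast

VALID_APIS = {"upload_file", "download_file", "list_objects"}  # assumed whitelist

def hallucinated_calls(code: str) -> list[str]:
    """Return attribute-call names in `code` that are not in VALID_APIS."""
    hallucinated = []
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            if node.func.attr not in VALID_APIS:
                hallucinated.append(node.func.attr)
    return hallucinated

if __name__ == "__main__":
    sample = "client.upload_file('a.txt')\nclient.upload_object('b.txt')\n"
    print(hallucinated_calls(sample))  # ['upload_object'] -> one hallucinated API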
Author:
Zhang, Yuhao; Wang, Shiqi; Qian, Haifeng; Wang, Zijian; Shang, Mingyue; Liu, Linbo; Gouda, Sanjay Krishna; Ray, Baishakhi; Ramanathan, Murali Krishna; Ma, Xiaofei; Deoras, Anoop
Code generation models are not robust to small perturbations, which often lead to incorrect generations and significantly degrade the performance of these models. Although improving the robustness of code generation models is crucial to enhancing user experience… (an illustrative sketch follows the link below)
External link:
http://arxiv.org/abs/2405.01567
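To make "small perturbations" concrete: a typical example is renaming a variable in the prompt, which should not change the intended completion. The sketch below applies such a rename and checks whether a completion function produces equivalent output; the complete() function here is a trivial placeholder standing in for a real code generation model, not one of the models studied in the paper.

# Hypothetical sketch: check whether a completion function is stable under a
# semantics-preserving perturbation (renaming a variable in the prompt).
import re

def rename_variable(text: str, old: str, new: str) -> str:
    """Rename an identifier with word boundaries so substrings are untouched."""
    return re.sub(rf"\b{re.escape(old)}\b", new, text)

def complete(prompt: str) -> str:
    # Trivial placeholder model: returns the last identifier in the prompt.
    idents = re.findall(r"[A-Za-z_]\w*", prompt)
    return idents[-1] if idents else ""

if __name__ == "__main__":
    original = "total = 0\nfor price in prices:\n    total += "
    perturbed = rename_variable(original, "total", "running_sum")
    out_original = complete(original)
    # Map the perturbed output back to the original name before comparing.
    out_perturbed = rename_variable(complete(perturbed), "running_sum", "total")
    print("robust to rename:", out_original == out_perturbed)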
Published in:
Interactive Journal of Medical Research, Vol 10, Iss 1, p e17063 (2021)
External link:
https://doaj.org/article/08b63d6c7b9f47e7b39a0f7ea94de4af
Recent advances in retrieval-augmented generation (RAG) have initiated a new era in repository-level code completion. However, the invariable use of retrieval in existing methods exposes issues in both efficiency and robustness, with a large proportion… (an illustrative sketch follows the link below)
External link:
http://arxiv.org/abs/2403.10059
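The snippet above argues that retrieving on every completion is wasteful; a selective policy retrieves cross-file context only when the local context looks insufficient. The sketch below is one naive, hypothetical heuristic for that decision, triggering retrieval when the line being completed uses names not defined in the current file; it is not the paper's learned policy.

# Hypothetical sketch: a naive selective-retrieval policy that triggers cross-file
# retrieval only when the line being completed uses names not defined locally.
import re

def locally_defined_names(prompt: str) -> set[str]:
    """Names defined, assigned, or imported in the current-file prefix."""
    pattern = (r"^\s*(?:def\s+(\w+)|class\s+(\w+)|(\w+)\s*=|"
               r"import\s+(\w+)|from\s+\w+\s+import\s+(\w+))")
    names = set()
    for match in re.finditer(pattern, prompt, flags=re.MULTILINE):
        names.update(group for group in match.groups() if group)
    return names

def should_retrieve(prompt: str) -> bool:
    """Retrieve when the last line references an identifier not defined locally."""
    last_line = prompt.rstrip().splitlines()[-1]
    used = set(re.findall(r"[A-Za-z_]\w*", last_line))
    return bool(used - locally_defined_names(prompt) - {"self", "return", "if", "for"})

if __name__ == "__main__":
    prompt = "from utils import parse\ndata = parse('x.csv')\nresult = summarize(data"
    print("retrieve cross-file context:", should_retrieve(prompt))  # True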
Author:
Ryan, Gabriel; Jain, Siddhartha; Shang, Mingyue; Wang, Shiqi; Ma, Xiaofei; Ramanathan, Murali Krishna; Ray, Baishakhi
Testing plays a pivotal role in ensuring software quality, yet conventional Search Based Software Testing (SBST) methods often struggle with complex software units, achieving suboptimal test coverage. Recent works using large language models (LLMs) for test generation… (an illustrative sketch follows the link below)
External link:
http://arxiv.org/abs/2402.00097
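Test coverage is the usual yardstick in this line of work. The sketch below measures which lines of a small unit under test are executed by one candidate test, using sys.settrace; the unit, the test, and the coverage accounting are toy assumptions for illustration only.

# Hypothetical sketch: measure the line coverage of a unit under test achieved by
# one candidate test, using sys.settrace. Toy unit and test for illustration only.
import inspect
import sys

def clamp(x, lo, hi):  # toy unit under test
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

def candidate_test():
    # A hypothetical generated test that exercises only the fall-through path.
    assert clamp(5, 0, 10) == 5

def coverage_of(test, target):
    """Return (executed, executable) line numbers of `target` while running `test`."""
    target_code = target.__code__
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is target_code:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        test()
    finally:
        sys.settrace(None)

    source, first = inspect.getsourcelines(target)
    executable = set(range(first + 1, first + len(source)))  # body lines, rough count
    return executed, executable

if __name__ == "__main__":
    hit, total = coverage_of(candidate_test, clamp)
    print(f"line coverage: {len(hit)}/{len(total)} ({len(hit) / len(total):.0%})")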
Author:
Ding, Yangruibo; Wang, Zijian; Ahmad, Wasi Uddin; Ding, Hantian; Tan, Ming; Jain, Nihal; Ramanathan, Murali Krishna; Nallapati, Ramesh; Bhatia, Parminder; Roth, Dan; Xiang, Bing
Code completion models have made significant progress in recent years, yet current popular evaluation datasets, such as HumanEval and MBPP, predominantly focus on code completion tasks within a single file. This over-simplified setting falls short of… (an illustrative sketch follows the link below)
External link:
http://arxiv.org/abs/2310.11248
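Repository-level completion needs context drawn from files other than the one being edited. As a minimal, hypothetical illustration of how cross-file context could be assembled (not the benchmark's construction pipeline), the sketch below inlines the source of in-repo modules imported by the current file; the toy repository layout is created on the fly.

# Hypothetical sketch: assemble cross-file context for repository-level completion
# by inlining the in-repo modules imported by the current file.
import ast
import tempfile
from pathlib import Path

def cross_file_context(repo_root: Path, current_file: Path, max_chars: int = 4000) -> str:
    """Concatenate the source of in-repo modules imported by `current_file`."""
    chunks = []
    for node in ast.walk(ast.parse(current_file.read_text())):
        if isinstance(node, ast.Import):
            modules = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules = [node.module]
        else:
            continue
        for module in modules:
            candidate = repo_root / (module.replace(".", "/") + ".py")
            if candidate.exists():
                chunks.append(f"# --- {candidate.name} ---\n{candidate.read_text()}")
    return "\n".join(chunks)[:max_chars]

if __name__ == "__main__":
    # Toy repository created on the fly so the sketch is self-contained.
    repo = Path(tempfile.mkdtemp())
    (repo / "utils.py").write_text("def parse(path):\n    return open(path).read()\n")
    (repo / "app.py").write_text("import utils\n\ndef main(path):\n    return utils.parse(path)\n")
    print(cross_file_context(repo, repo / "app.py"))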
Author:
Yadav, Prateek; Sun, Qing; Ding, Hantian; Li, Xiaopeng; Zhang, Dejiao; Tan, Ming; Ma, Xiaofei; Bhatia, Parminder; Nallapati, Ramesh; Ramanathan, Murali Krishna; Bansal, Mohit; Xiang, Bing
Large-scale code generation models such as Codex and CodeT5 have achieved impressive performance. However, libraries are upgraded or deprecated very frequently and re-training large-scale language models is computationally expensive. Therefore, Continual Learning… (an illustrative sketch follows the link below)
External link:
http://arxiv.org/abs/2307.02435
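A common continual-learning baseline in this setting is experience replay: when adapting a model to a new library version, examples from earlier versions are mixed into each batch to limit forgetting. The sketch below shows only the data-mixing side with a stubbed training step; it is a generic replay baseline under assumed parameters, not the method evaluated in the paper.

# Hypothetical sketch: experience-replay data mixing for continually adapting a code
# model to a new library version. The training step is a stub, not a real update.
import random

def replay_batches(new_task, replay_buffer, batch_size=8, replay_fraction=0.25, steps=3):
    """Yield batches that mix new-task examples with replayed old-task examples."""
    n_replay = int(batch_size * replay_fraction)
    for _ in range(steps):
        batch = random.sample(new_task, batch_size - n_replay)
        if replay_buffer:
            batch += random.sample(replay_buffer, min(n_replay, len(replay_buffer)))
        random.shuffle(batch)
        yield batch

def train_step(examples_seen, batch):
    # Placeholder: a real implementation would run a gradient update on `batch`.
    return examples_seen + len(batch)

if __name__ == "__main__":
    old_examples = [f"old_api_example_{i}" for i in range(20)]  # earlier library version
    new_examples = [f"new_api_example_{i}" for i in range(20)]  # upgraded library version
    seen = 0
    for batch in replay_batches(new_examples, old_examples):
        seen = train_step(seen, batch)
    print("trained on", seen, "examples (old and new mixed)")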
Author:
Ding, Hantian; Kumar, Varun; Tian, Yuchen; Wang, Zijian; Kwiatkowski, Rob; Li, Xiaopeng; Ramanathan, Murali Krishna; Ray, Baishakhi; Bhatia, Parminder; Sengupta, Sudipta; Roth, Dan; Xiang, Bing
Large language models trained on code have shown great potential to increase productivity of software developers. Several execution-based benchmarks have been proposed to evaluate functional correctness of model-generated code on simple programming problems… (an illustrative sketch follows the link below)
External link:
http://arxiv.org/abs/2306.03203
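An execution-based benchmark judges a generated solution by running it against tests rather than by string match. The sketch below scores made-up candidate completions for a toy problem by executing assertions against each one and reporting the fraction that pass; the problem, candidates, and tests are illustrative assumptions.

# Hypothetical sketch: execution-based scoring of generated code, checking functional
# correctness by running tests. Toy problem, candidates, and tests are made up.
def passes(candidate_src: str, test_src: str) -> bool:
    """Execute a candidate and its tests in one namespace; True if nothing raises."""
    namespace = {}
    try:
        exec(candidate_src, namespace)  # define the candidate function
        exec(test_src, namespace)       # run the assertions against it
        return True
    except Exception:
        return False

if __name__ == "__main__":
    tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"
    candidates = [
        "def add(a, b):\n    return a + b",  # correct
        "def add(a, b):\n    return a - b",  # functionally wrong
        "def add(a, b) return a + b",        # syntax error
    ]
    results = [passes(candidate, tests) for candidate in candidates]
    print(f"pass rate: {sum(results)}/{len(results)}")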
Author:
Wei, Xiaokai; Gonugondla, Sujan; Ahmad, Wasi; Wang, Shiqi; Ray, Baishakhi; Qian, Haifeng; Li, Xiaopeng; Kumar, Varun; Wang, Zijian; Tian, Yuchen; Sun, Qing; Athiwaratkun, Ben; Shang, Mingyue; Ramanathan, Murali Krishna; Bhatia, Parminder; Xiang, Bing
ML-powered code generation aims to assist developers in writing code more productively by intelligently generating code blocks based on natural language prompts. Recently, large pretrained deep learning models have substantially pushed the boundary…
External link:
http://arxiv.org/abs/2303.05378