Showing 1 - 6 of 6 for search: '"Ibrahimzada, Ali Reza"'
Solely relying on test passing to evaluate Large Language Models (LLMs) for code synthesis may result in unfair assessment or promote models with data leakage. As an alternative, we introduce CodeMind, a framework designed to gauge the code reasoning…
External link:
http://arxiv.org/abs/2402.09664
Author:
Ibrahimzada, Ali Reza
The rising popularity of Large Language Models (LLMs) has motivated exploring their use in code-related tasks. Code LLMs with more than millions of parameters are trained on a massive amount of code in different Programming Languages (PLs). Such models…
External link:
http://arxiv.org/abs/2401.12412
Bugs are essential in software engineering; many techniques have been proposed in research studies over the past decades to detect, localize, and repair bugs in software systems. Evaluating the effectiveness of such techniques requires complex bugs, i.e., those that are…
External link:
http://arxiv.org/abs/2310.02407
Authors:
Pan, Rangeet, Ibrahimzada, Ali Reza, Krishna, Rahul, Sankar, Divya, Wassi, Lambert Pouguem, Merler, Michele, Sobolev, Boris, Pavuluri, Raju, Sinha, Saurabh, Jabbarvand, Reyhaneh
Code translation aims to convert source code from one programming language (PL) to another. Given the promising abilities of large language models (LLMs) in code synthesis, researchers are exploring their potential to automate code translation…
External link:
http://arxiv.org/abs/2308.03109
Automation of test oracles is one of the most challenging facets of software testing, yet it remains less addressed than automated test input generation. Test oracles rely on a ground truth that can distinguish between the correct…
External link:
http://arxiv.org/abs/2302.01488
Academic article
This result is not available to unauthenticated users; logging in is required to view it.