Showing 1 - 10 of 114,696 for search: '"Srikanth"'
Author:
Bozkurt, Burcu, Planey, Arrianna Marie, Aijaz, Monisa, Weinstein, Joshua M., Cilenti, Dorothy, Shea, Christopher M., Khairat, Saif
Published in:
Permanente Journal. 6/14/2024, Vol. 28 Issue 2, p36-46. 11p.
Published in:
Alg. Number Th. 15 (2021) 1157-1180
We investigate cohomological support varieties for finite-dimensional Lie superalgebras defined over fields of odd characteristic. Verifying a conjecture from our previous work, we show the support variety of a finite-dimensional supermodule can be …
External link:
http://arxiv.org/abs/1912.07117
Author:
Cai, Feiyang, Zhu, Tianyu, Tzeng, Tzuen-Rong, Duan, Yongping, Liu, Ling, Pilla, Srikanth, Li, Gang, Luo, Feng
Artificial intelligence (AI) has significantly advanced computational chemistry research. However, traditional AI methods often rely on task-specific model designs and training, which constrain both the scalability of model size and generalization …
External link:
http://arxiv.org/abs/2410.21422
Efficiently deriving structured workflows from unannotated dialogs remains an underexplored and formidable challenge in computational linguistics. Automating this process could significantly accelerate the manual design of workflows in new domains and …
External link:
http://arxiv.org/abs/2410.18481
Author:
Shahriar, Sadat, Qi, Zheng, Pappas, Nikolaos, Doss, Srikanth, Sunkara, Monica, Halder, Kishaloy, Mager, Manuel, Benajiba, Yassine
Aligning Large Language Models (LLMs) to address subjectivity and nuanced preference levels requires adequate flexibility and control, which can be a resource-intensive and time-consuming procedure. Existing training-time alignment methods require full …
External link:
http://arxiv.org/abs/2410.19206
Author:
Maharaj, Kishan, Munigala, Vitobha, Tamilselvam, Srikanth G., Kumar, Prince, Sen, Sayandeep, Kodeswaran, Palani, Mishra, Abhijit, Bhattacharyya, Pushpak
Recent advancements in large language models (LLMs) have significantly enhanced their ability to understand both natural language and code, driving their use in tasks like natural language-to-code (NL2Code) and code summarization. However, LLMs are …
External link:
http://arxiv.org/abs/2410.14748
Author:
Krishna, Rahul, Pan, Rangeet, Pavuluri, Raju, Tamilselvam, Srikanth, Vukovic, Maja, Sinha, Saurabh
Large Language Models for Code (or code LLMs) are increasingly gaining popularity and capabilities, offering a wide array of functionalities such as code completion, code generation, code summarization, test generation, code translation, and more. To …
External link:
http://arxiv.org/abs/2410.13007
Author:
Liu, Qin, Shang, Chao, Liu, Ling, Pappas, Nikolaos, Ma, Jie, John, Neha Anna, Doss, Srikanth, Marquez, Lluis, Ballesteros, Miguel, Benajiba, Yassine
The safety alignment ability of Vision-Language Models (VLMs) tends to be degraded by the integration of the vision module compared to that of the LLM backbone. We investigate this phenomenon, dubbed "safety alignment degradation" in this paper, and …
External link:
http://arxiv.org/abs/2410.09047
Author:
Behera, Agnish Kumar, Du, Matthew, Jagadisan, Uday, Sastry, Srikanth, Rao, Madan, Vaikuntanathan, Suriyanarayanan
The classic paradigms for learning and memory recall focus on the strengths of synaptic couplings and how these can be modulated to encode memories. In a previous paper [A. K. Behera, M. Rao, S. Sastry, and S. Vaikuntanathan, Physical Review X 13, 041043] …
External link:
http://arxiv.org/abs/2410.06269