Showing 1 - 10 of 3,688 for the search: '"Parthasarathi, P."'
The growth in prominence of large language models (LLMs) in everyday life can be largely attributed to their generative abilities, yet some of this is also owed to the risks and costs associated with their use. On one front is their tendency to…
External link:
http://arxiv.org/abs/2410.17477
Author:
Majumdar, Parthasarathi
We examine possible additive corrections to the Bekenstein-Hawking (BH) entropy of black holes due to very general classical and quantal modifications of general relativity. In general, black hole entropy is subject to the Generalized Second Law of Thermodynamics…
External link:
http://arxiv.org/abs/2408.13820
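For context on the entry above, the uncorrected Bekenstein-Hawking entropy is the textbook area law; the expressions below restate that standard result and one commonly studied form of additive correction, as an illustration rather than anything taken from the paper itself.

```latex
% Bekenstein-Hawking area law (standard result, not from the abstract)
S_{\mathrm{BH}} = \frac{k_B c^{3} A}{4 G \hbar} = \frac{k_B A}{4 \ell_P^{2}},
\qquad \ell_P^{2} = \frac{G\hbar}{c^{3}} .

% A generic additively corrected entropy often discussed in the literature
% (the coefficient a_0 and the higher terms depend on the chosen modification of GR):
S = S_{\mathrm{BH}} + a_0 \, k_B \ln\!\left(\frac{A}{\ell_P^{2}}\right) + \dots
```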
Despite their widespread adoption, large language models (LLMs) remain prohibitive to use under resource constraints, with their ever-growing sizes only increasing the barrier for use. One noted issue is the high latency associated with auto-regressive…
External link:
http://arxiv.org/abs/2408.08470
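To make the latency point in the entry above concrete, the sketch below times naive token-by-token generation. It assumes the Hugging Face transformers package and the public gpt2 checkpoint, neither of which is named in the entry, and the small model stands in for a large LLM only for illustration. Each new token requires a full forward pass conditioned on everything generated so far, which is the sequential cost the abstract refers to.

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative stand-in: a small public model instead of a large LLM.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("Auto-regressive decoding emits", return_tensors="pt").input_ids

with torch.no_grad():
    for step in range(20):
        start = time.perf_counter()
        logits = model(ids).logits                      # one full forward pass per new token
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=1)          # the context grows, so later steps cost more
        print(f"token {step}: {time.perf_counter() - start:.3f}s")

print(tokenizer.decode(ids[0]))
```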
Locations of DNA replication initiation in prokaryotes, called "origins of replication", are well-characterized. However, a mechanistic understanding of the sequence-dependence of the local unzipping of double-stranded DNA, the first step towards replication…
External link:
http://arxiv.org/abs/2407.13260
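As a worked illustration of the sequence dependence mentioned in the entry above (a generic two-state estimate, not the model used in the paper): if opening base pair i costs a free energy ΔG_i, with AT pairs cheaper to open than GC pairs, a Boltzmann-weighted picture already makes AT-rich stretches the likelier unzipping sites.

```latex
% Generic two-state opening probability for base pair i (illustrative, not the paper's model)
p_{\mathrm{open}}(i) = \frac{e^{-\Delta G_i / k_B T}}{1 + e^{-\Delta G_i / k_B T}},
\qquad \Delta G_{\mathrm{AT}} < \Delta G_{\mathrm{GC}} .
```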
Author:
Dehghan, Mohammad, Alomrani, Mohammad Ali, Bagga, Sunyam, Alfonso-Hermelo, David, Bibi, Khalil, Ghaddar, Abbas, Zhang, Yingxue, Li, Xiaoguang, Hao, Jianye, Liu, Qun, Lin, Jimmy, Chen, Boxing, Parthasarathi, Prasanna, Biparva, Mahdi, Rezagholizadeh, Mehdi
The emerging citation-based QA systems are gaining more attention, especially in generative AI search applications. The extracted knowledge provided to these systems is vital from both accuracy (completeness of information) and efficiency…
External link:
http://arxiv.org/abs/2406.10393
Author:
Ghaddar, Abbas, Alfonso-Hermelo, David, Langlais, Philippe, Rezagholizadeh, Mehdi, Chen, Boxing, Parthasarathi, Prasanna
In this work, we dive deep into one of the popular knowledge-grounded dialogue benchmarks that focus on faithfulness, FaithDial. We show that a significant portion of the FaithDial data contains annotation artifacts, which may bias models towards com…
External link:
http://arxiv.org/abs/2405.15110
Large language models (LLMs) show an innate skill for solving language-based tasks. But insights have suggested an inability to adjust for information or task-solving skills becoming outdated, as their knowledge, stored directly within their parameters…
External link:
http://arxiv.org/abs/2404.09339
Author:
Srinivasan, Bama, Nehru, Mala, Parthasarathi, Ranjani, Mukherjee, Saswati, Thankachan, Jeena A
This paper provides a few approaches to automating computer programming and project submission tasks, which we have been following for the last six years and have found to be successful. The approaches include using CodeRunner with Learning Management…
External link:
http://arxiv.org/abs/2404.04521
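As a loose illustration of the kind of automated checking described in the entry above (a generic Python autograding sketch, not CodeRunner's question format and not the authors' actual setup; all names here are hypothetical): a per-exercise grader runs the submitted function against reference cases and reports the pass fraction.

```python
# Hypothetical autograder sketch; function and variable names are illustrative only.

def reference_sum_of_squares(n):
    """Instructor's reference implementation for one exercise."""
    return sum(i * i for i in range(1, n + 1))

REFERENCE_CASES = [1, 5, 10, 100]

def grade(student_solution):
    """Return the fraction of test cases the submitted function passes."""
    passed = 0
    for n in REFERENCE_CASES:
        try:
            if student_solution(n) == reference_sum_of_squares(n):
                passed += 1
        except Exception:
            pass  # a crashing submission simply fails that case
    return passed / len(REFERENCE_CASES)

# A correct submission scores 1.0:
print(grade(lambda n: sum(i ** 2 for i in range(1, n + 1))))
```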
Published in:
Physical Review D 110, L021701 (2024)
Inspired by the pioneering 1968 work of L. Parker, demonstrating matter quanta production in a dynamical spacetime background, we consider production of scalar quanta in a gravitational wave background. Choosing the spacetime to be a flat spacetime perturbed…
External link:
http://arxiv.org/abs/2404.01840
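For the entry above, the standard setting (stated here from textbook conventions, not taken from the paper) is a flat metric carrying a small gravitational-wave perturbation, with a minimally coupled scalar field propagating on it; particle production then appears as the mixing of positive- and negative-frequency modes in the time-dependent background.

```latex
% Flat spacetime with a small gravitational-wave perturbation (transverse-traceless gauge)
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1,
\qquad h_{0\mu} = 0, \quad \eta^{\mu\nu} h_{\mu\nu} = 0, \quad \partial^{\mu} h_{\mu\nu} = 0 .

% Minimally coupled scalar field of mass m on this background (mostly-plus signature)
\frac{1}{\sqrt{-g}}\,\partial_{\mu}\!\left(\sqrt{-g}\, g^{\mu\nu}\, \partial_{\nu}\phi\right) - m^{2}\phi = 0 .
```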
Author:
Sudhakar, Arjun Vaithilingam, Parthasarathi, Prasanna, Rajendran, Janarthanan, Chandar, Sarath
Large Language Models (LLMs) have demonstrated superior performance in language understanding benchmarks. CALM, a popular approach, leverages linguistic priors of LLMs -- GPT-2 -- for action candidate recommendations to improve the performance in text…
External link:
http://arxiv.org/abs/2311.07687
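As a rough sketch of the technique class named in the entry above (using a language model's linguistic priors to rank candidate actions in a text game), the snippet below scores candidates by their log-likelihood under the public gpt2 checkpoint given the current observation. This is an illustration of the idea, assuming the Hugging Face transformers package; it is not the CALM implementation or the paper's method.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only: rank candidate actions by LM log-likelihood given the observation.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

observation = "You are in a dark room. There is a door to the north."
candidates = ["open door", "go north", "eat door", "sing loudly"]

def action_logprob(obs, action):
    """Sum of token log-probabilities of the action, conditioned on the observation."""
    prefix_ids = tokenizer(obs + " >", return_tensors="pt").input_ids
    action_ids = tokenizer(" " + action, return_tensors="pt").input_ids
    full = torch.cat([prefix_ids, action_ids], dim=1)
    with torch.no_grad():
        logits = model(full).logits
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)       # position i predicts token i+1
    token_lp = log_probs.gather(2, full[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_lp[0, prefix_ids.shape[1] - 1:].sum().item()      # keep only the action tokens

ranked = sorted(candidates, key=lambda a: action_logprob(observation, a), reverse=True)
print(ranked)  # more natural actions should rank higher
```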