Showing 1 - 10
of 99,375
for search: '"A. Bhatia"'
Author:
SHRAVANI SANYAL, BIDISHA CHAKRABARTI, A. BHATIA, S. NARESH KUMAR, T.J. PURAKAYASTHA, DINESH KUMAR, PRAGATI PRAMANIK, S. KANNOJIYA, A. SHARMA, V. KUMAR
Published in:
Journal of Agrometeorology, Vol 25, Iss 4 (2023)
An experiment was undertaken during the rabi seasons of 2020-2021 and 2021-2022 at the experimental field of the Division of Environmental Science, ICAR-Indian Agricultural Research Institute (IARI), New Delhi, inside Open Top Chambers (OTCs) to study the growth…
External link:
https://doaj.org/article/b772c6dc2c1945989787eb3e1f7ea42a
Author:
PRIYA BHATTACHARYA, K.K. BANDYOPADHYAY, P. KRISHNAN, P.P. MAITY, T.J. PURAKAYASTHA, A. BHATIA, B. CHAKRABORTY, S.N. KUMAR, SUJAN ADAK, RITU TOMER, MEENAKSHI
Published in:
Journal of Agrometeorology, Vol 25, Iss 4 (2023)
A two-year field study was carried out at the Indian Agricultural Research Institute, New Delhi, from rabi 2020-21 to 2021-22, with the aim of examining the impacts of tillage and residue management on yield, greenhouse gas (GHG) emissions, global…
External link:
https://doaj.org/article/90b43b37210e44f2b159385cac89a588
Large language models (LLMs) can learn vast amounts of knowledge from diverse domains during pre-training. However, long-tail knowledge from specialized domains is often scarce and underrepresented, rarely appearing in the models' memorization. Prior…
External link:
http://arxiv.org/abs/2410.23605
We study Liouville quantum gravity (LQG) in the supercritical (a.k.a. strongly coupled) phase, which has background charge $Q \in (0,2)$ and central charge $\mathbf{c}_{\mathrm{L}} = 1+6Q^2 \in (1,25)$. Recent works have shown how to define LQG in…
External link:
http://arxiv.org/abs/2410.12693
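
As a quick check of the parameter range quoted in the abstract above (a worked restatement of the stated formula, not additional content from the paper): since $\mathbf{c}_{\mathrm{L}} = 1 + 6Q^2$ is increasing in $Q$ for $Q > 0$,
\[
Q \in (0,2) \;\Longrightarrow\; \mathbf{c}_{\mathrm{L}} = 1 + 6Q^2 \in \bigl(1 + 6\cdot 0^2,\; 1 + 6\cdot 2^2\bigr) = (1, 25),
\]
which matches the central charge interval given in the abstract.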
Author:
Sarukkai, Vishnu, Shacklett, Brennan, Majercik, Zander, Bhatia, Kush, Ré, Christopher, Fatahalian, Kayvon
Large Language Models (LLMs) have the potential to automate reward engineering by leveraging their broad domain knowledge across various tasks. However, they often need many iterations of trial-and-error to generate effective reward functions. This…
External link:
http://arxiv.org/abs/2410.09187
Fine-tuning large language models (LLMs) on instruction datasets is a common way to improve their generative capabilities. However, instruction datasets can be expensive and time-consuming to manually curate, and while LLM-generated data is less…
External link:
http://arxiv.org/abs/2410.05224
Author:
Jiang, Pengcheng, Xiao, Cao, Jiang, Minhao, Bhatia, Parminder, Kass-Hout, Taha, Sun, Jimeng, Han, Jiawei
Large language models (LLMs) have demonstrated significant potential in clinical decision support. Yet LLMs still suffer from hallucinations and lack fine-grained contextual medical knowledge, limiting their high-stakes healthcare applications such as…
External link:
http://arxiv.org/abs/2410.04585
Parameter Efficient Fine-Tuning (PEFT) offers an efficient solution for fine-tuning large pretrained language models for downstream tasks. However, most PEFT strategies are manually designed, often resulting in suboptimal performance. Recent…
External link:
http://arxiv.org/abs/2410.09079
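
For readers unfamiliar with the term, the core idea behind many PEFT methods is to freeze the pretrained weights and train only a small set of added parameters. Below is a minimal LoRA-style sketch in PyTorch; it is a generic illustration of that idea, not the method of the paper above, and the class name, rank, and layer dimensions are made up for the example.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Wraps a pretrained linear layer: the base weights stay frozen,
    # and only a rank-r update B @ A is trained on top of them.
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: identity behavior at start
        self.scale = alpha / rank

    def forward(self, x):
        # y = base(x) + (alpha/r) * x A^T B^T; only A and B receive gradients
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable} of {total}")

For a 768x768 layer with rank 8, only about 2% of the parameters are trainable, which is the efficiency gain PEFT methods target.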
Author:
Chiu, Yu Ying, Jiang, Liwei, Lin, Bill Yuchen, Park, Chan Young, Li, Shuyue Stella, Ravi, Sahithya, Bhatia, Mehar, Antoniak, Maria, Tsvetkov, Yulia, Shwartz, Vered, Choi, Yejin
To make large language models (LLMs) more helpful across diverse cultures, it is essential to have effective cultural knowledge benchmarks to measure and track our progress. Effective benchmarks need to be robust, diverse, and challenging. We…
External link:
http://arxiv.org/abs/2410.02677
Future communication systems are anticipated to facilitate applications requiring high data transmission rates while maintaining energy efficiency. Hexagonal quadrature amplitude modulation (HQAM) offers this owing to its compact symbol arrangement…
External link:
http://arxiv.org/abs/2410.02661
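
The "compact symbol arrangement" in this last abstract refers to placing constellation points on a hexagonal lattice, the densest packing in the plane. A small illustrative Python sketch follows (not code from the paper; the constellation order and lattice span are arbitrary choices for the example).

import numpy as np

def hqam_constellation(order=16, span=6):
    # Hexagonal lattice: integer combinations of u=(1,0) and v=(1/2, sqrt(3)/2),
    # represented as complex baseband symbols; keep the `order` points
    # nearest the origin to minimize average symbol energy.
    pts = [complex(a + 0.5 * b, b * np.sqrt(3) / 2)
           for a in range(-span, span + 1)
           for b in range(-span, span + 1)]
    pts.sort(key=abs)
    return np.array(pts[:order])

symbols = hqam_constellation(16)
print(f"average symbol energy: {np.mean(np.abs(symbols) ** 2):.3f}")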