Showing 1 - 10 of 7,382 results for search: '"A. Sitaram"'
Author:
Hsu, Aliyah R., Zhu, James, Wang, Zhichao, Bi, Bin, Mehrotra, Shubham, Pentyala, Shiva K., Tan, Katherine, Mao, Xiang-Bo, Omrani, Roshanak, Chaudhuri, Sougata, Radhakrishnan, Regunathan, Asur, Sitaram, Cheng, Claire Na, Yu, Bin
LLMs have demonstrated impressive proficiency in generating coherent and high-quality text, making them valuable across a range of text-generation tasks. However, rigorous evaluation of this generated content is crucial, as ensuring its quality remains …
External link:
http://arxiv.org/abs/2411.02448
Benchmark contamination refers to the presence of test datasets in Large Language Model (LLM) pre-training or post-training data. Contamination can lead to inflated scores on benchmarks, compromising evaluation results and making it difficult to … (see the sketch after the link below)
External link:
http://arxiv.org/abs/2410.16186
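The notion of benchmark contamination defined above can be made concrete with a small sketch: a naive check that flags a benchmark item when one of its word 8-grams appears verbatim in training text. This is a generic illustration, not the detection method of the paper behind the link; the function names and the 8-gram threshold are assumptions made for the example.

```python
# Naive contamination check (illustrative only): flag a benchmark item if any
# of its word 8-grams also occurs verbatim in the training corpus.

def ngrams(text: str, n: int = 8) -> set:
    """Set of word n-grams of `text` (lowercased, whitespace-tokenized)."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(benchmark_item: str, training_docs: list, n: int = 8) -> bool:
    """True if any n-gram of the benchmark item occurs in any training document."""
    item_grams = ngrams(benchmark_item, n)
    return any(item_grams & ngrams(doc, n) for doc in training_docs)

if __name__ == "__main__":
    docs = ["The quick brown fox jumps over the lazy dog near the river bank today."]
    item = "quick brown fox jumps over the lazy dog near the river"
    print(is_contaminated(item, docs))  # True: an 8-gram is shared verbatim
```

Real detectors normalize text, hash n-grams, and handle paraphrases; this toy version only shows why verbatim overlap between test items and training data inflates benchmark scores.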
Large Language Models (LLMs) demonstrate exceptional capabilities in a multitude of NLP tasks. However, the efficacy of such models in languages other than English is often limited. Prior works have shown that encoder-only models such as BERT or XLM-R …
External link:
http://arxiv.org/abs/2410.16168
A common challenge to the adaptability of Large Language Models (LLMs) is their ability to learn new languages over time without hampering the model's performance on languages in which it is already proficient (usually English). Continual …
External link:
http://arxiv.org/abs/2410.16006
Assessing the capabilities and limitations of large language models (LLMs) has garnered significant interest, yet the evaluation of multiple models in real-world scenarios remains rare. Multilingual evaluation often relies on translated benchmarks, which …
External link:
http://arxiv.org/abs/2410.13671
Information in speech can be divided into two categories: what is being said (content) and how it is expressed (other). Current state-of-the-art (SOTA) techniques model speech at fixed segments, usually 10-25 ms, using a single embedding. Given the … (see the framing sketch after the link below)
External link:
http://arxiv.org/abs/2410.11086
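The fixed-segment modelling mentioned above (windows of roughly 10-25 ms, one embedding per segment) can be illustrated with a short sketch. This is a generic framing routine under assumed parameters (16 kHz audio, 25 ms window, 10 ms hop, and a mean-energy placeholder standing in for the embedding); it is not the representation proposed in the linked paper.

```python
import numpy as np

def frame_signal(wave: np.ndarray, sr: int = 16000,
                 win_ms: float = 25.0, hop_ms: float = 10.0) -> np.ndarray:
    """Slice a mono waveform into fixed-length frames (25 ms window, 10 ms hop)."""
    win = int(sr * win_ms / 1000)   # 400 samples at 16 kHz
    hop = int(sr * hop_ms / 1000)   # 160 samples at 16 kHz
    n_frames = 1 + max(0, (len(wave) - win) // hop)
    return np.stack([wave[i * hop:i * hop + win] for i in range(n_frames)])

if __name__ == "__main__":
    sr = 16000
    wave = np.random.randn(sr)               # 1 second of dummy audio
    frames = frame_signal(wave, sr)          # shape (98, 400)
    embeddings = (frames ** 2).mean(axis=1)  # placeholder: one value per frame
    print(frames.shape, embeddings.shape)
```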
Author:
Dubey, Swadheen, Kazakov, Georgy A., Heizenreder, Benedikt, Zhou, Sheng, Bennetts, Shayne, Schäffer, Stefan Alaric, Sitaram, Ananya, Schreck, Florian
Continuous superradiance using a narrow optical transition has the potential to improve the short-term stability of state-of-the-art optical clocks. Even though pulsed superradiant emission on a mHz-linewidth clock transition has been shown, true continuous …
External link:
http://arxiv.org/abs/2409.06575
Aharonov-Bohm (AB) caging is the phenomenon of extreme localization of particles experiencing a magnetic field in certain tight-binding lattices. While AB caging involves the localization of non-interacting particles, it often breaks down due to … (a textbook-style sketch follows the link below)
External link:
http://arxiv.org/abs/2409.05853
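To make the caging phenomenon described above concrete, here is a textbook-style sketch (standard notation, not taken from the linked paper) of how a magnetic field enters a tight-binding model through Peierls phases on the hoppings.

```latex
% Generic tight-binding Hamiltonian with Peierls phases (illustrative form):
\[
  H = -J \sum_{\langle j,k \rangle} \left( e^{i\theta_{jk}}\, c_j^{\dagger} c_k + \mathrm{h.c.} \right),
  \qquad
  \sum_{\square} \theta_{jk} = 2\pi \, \frac{\Phi}{\Phi_0},
\]
% where $\Phi$ is the magnetic flux through a plaquette and $\Phi_0$ the flux
% quantum. At special flux values (for example $\Phi = \Phi_0/2$ on a rhombic
% lattice) destructive Aharonov-Bohm interference flattens all bands, so a
% non-interacting particle cannot spread beyond a few sites: this is AB caging.
```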
Author:
Ramakrishnan, Sitaram, Yamakawa, Tatsuya, Oishi, Ryohei, Yamane, Soichiro, Ikeda, Atsutoshi, Kado, Masaki, Shimura, Yasuyuki, Takabatake, Toshiro, Onimaru, Takahiro, Shibata, Yasuhiro, Thamizhavel, Arumugam, Ramakrishnan, Srinivasan, Yonezawa, Shingo, Nohara, Minoru
We report the crystal structures and superconductivity (SC) of LaPt$_{x}$Si$_{2-x}$ ($0.5 \leq x \leq 1.0$), which are solid solutions of LaSi$_{2}$ and LaPtSi with centrosymmetric tetragonal ($I4_{1}/amd$, $D_{4h}^{19}$, #141) and non-centrosymmetric …
External link:
http://arxiv.org/abs/2408.17033
Author:
Wang, Zhichao, Bi, Bin, Huang, Can, Pentyala, Shiva Kumar, Zhu, Zixu James, Asur, Sitaram, Cheng, Na Claire
An LLM is pretrained on trillions of tokens, but the pretrained model may still generate undesired responses. To solve this problem, alignment techniques such as RLHF, DPO, and KTO have been proposed. However, these alignment techniques have limitations. For … (see the DPO sketch after the link below)
External link:
http://arxiv.org/abs/2408.15339
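Since the abstract above names DPO among the alignment techniques, a minimal sketch of the standard DPO objective may help situate it. This is the generic published loss in schematic form, not the method proposed in the linked paper; the tensor names and the beta value are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective: -log sigmoid(beta * (chosen margin - rejected margin)).

    Each *_logps tensor holds the summed log-probability of a response under the
    trainable policy or the frozen reference model, shape (batch,).
    """
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

if __name__ == "__main__":
    b = 4  # toy batch of preference pairs
    loss = dpo_loss(torch.randn(b), torch.randn(b), torch.randn(b), torch.randn(b))
    print(loss.item())
```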