Showing 1 - 10 of 1,709 for search: '"Ravichander"'
Author:
Srinath Reddy Mannem, Chiruvella Mallikarjuna, Enganti Bhavatej, N Bendigeri Mohammed Taif, Oleti Ravichander, M Ghouse Syed
Published in:
Indian Journal of Urology, Vol 38, Iss 3, Pp 245-246 (2022)
External link:
https://doaj.org/article/0cc276a0f0554d79a2ffc79cdce172b4
Author:
Rezaei, Keivan, Chandu, Khyathi, Feizi, Soheil, Choi, Yejin, Brahman, Faeze, Ravichander, Abhilasha
Large language models trained on web-scale corpora can memorize undesirable datapoints such as incorrect facts, copyrighted content or sensitive data. Recently, many machine unlearning algorithms have been proposed that aim to `erase' these datapoint… (a generic unlearning sketch follows this record)
External link:
http://arxiv.org/abs/2411.00204
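For readers unfamiliar with the setting, below is a minimal sketch of one common unlearning baseline, gradient ascent on a designated forget set. It is not the method studied in the paper above; the model name and forget texts are placeholders.

```python
# Minimal gradient-ascent unlearning sketch (a common baseline, not the
# method from the cited paper). Model and forget-set texts are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

forget_texts = ["Example memorized sequence to erase."]  # hypothetical forget set

model.train()
for text in forget_texts:
    batch = tokenizer(text, return_tensors="pt")
    outputs = model(**batch, labels=batch["input_ids"])
    # Ascend (rather than descend) the language-modeling loss on the
    # forget set, pushing the model away from the memorized sequence.
    loss = -outputs.loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```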
Author:
Balepur, Nishant, Gu, Feng, Ravichander, Abhilasha, Feng, Shi, Boyd-Graber, Jordan, Rudinger, Rachel
Question answering (QA) - producing correct answers for input questions - is popular, but we test a reverse question answering (RQA) task: given an input answer, generate a question with that answer. Past work tests QA and RQA separately, but we test the… (a QA vs. RQA prompt sketch follows this record)
External link:
http://arxiv.org/abs/2410.15512
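To make the task contrast concrete, here is a minimal sketch of a forward QA prompt versus a reverse QA (RQA) prompt. The wording is illustrative and not taken from the paper.

```python
# Sketch of forward QA vs. reverse QA (RQA) prompts; prompt wording is
# an assumption for illustration, not the authors' exact format.
def qa_prompt(question: str) -> str:
    """Standard QA: the model is given a question and must produce the answer."""
    return f"Answer the following question.\nQuestion: {question}\nAnswer:"

def rqa_prompt(answer: str) -> str:
    """Reverse QA: the model is given an answer and must produce a question."""
    return (
        "Write a question whose correct answer is the text below.\n"
        f"Answer: {answer}\nQuestion:"
    )

print(qa_prompt("What is the capital of France?"))
print(rqa_prompt("Paris"))
```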
Author:
Mannem SR; Department of Urology, Asian Institute of Nephrology and Urology, Hyderabad, Telangana, India., Mallikarjuna C; Department of Urology, Asian Institute of Nephrology and Urology, Hyderabad, Telangana, India., Bhavatej E; Department of Urology, Asian Institute of Nephrology and Urology, Hyderabad, Telangana, India., Taif NBM; Department of Urology, Asian Institute of Nephrology and Urology, Hyderabad, Telangana, India., Ravichander O; Department of Urology, Asian Institute of Nephrology and Urology, Hyderabad, Telangana, India., Syed MG; Department of Urology, Asian Institute of Nephrology and Urology, Hyderabad, Telangana, India.
Published in:
Indian journal of urology : IJU : journal of the Urological Society of India [Indian J Urol] 2022 Jul-Sep; Vol. 38 (3), pp. 245-246. Date of Electronic Publication: 2022 Jul 01.
Author:
Zhao, Wenting, Goyal, Tanya, Chiu, Yu Ying, Jiang, Liwei, Newman, Benjamin, Ravichander, Abhilasha, Chandu, Khyathi, Bras, Ronan Le, Cardie, Claire, Deng, Yuntian, Choi, Yejin
While hallucinations of large language models (LLMs) prevail as a major challenge, existing evaluation benchmarks on factuality do not cover the diverse domains of knowledge that the real-world users of LLMs seek information about. To bridge this gap…
External link:
http://arxiv.org/abs/2407.17468
Author:
Brahman, Faeze, Kumar, Sachin, Balachandran, Vidhisha, Dasigi, Pradeep, Pyatkin, Valentina, Ravichander, Abhilasha, Wiegreffe, Sarah, Dziri, Nouha, Chandu, Khyathi, Hessel, Jack, Tsvetkov, Yulia, Smith, Noah A., Choi, Yejin, Hajishirzi, Hannaneh
Chat-based language models are designed to be helpful, yet they should not comply with every user request. While most existing work primarily focuses on refusal of "unsafe" queries, we posit that the scope of noncompliance should be broadened. We int…
External link:
http://arxiv.org/abs/2407.12043
Author:
Lin, Bill Yuchen, Deng, Yuntian, Chandu, Khyathi, Brahman, Faeze, Ravichander, Abhilasha, Pyatkin, Valentina, Dziri, Nouha, Bras, Ronan Le, Choi, Yejin
We introduce WildBench, an automated evaluation framework designed to benchmark large language models (LLMs) using challenging, real-world user queries. WildBench consists of 1,024 tasks carefully selected from over one million human-chatbot conversa… (a generic LLM-judge sketch follows this record)
External link:
http://arxiv.org/abs/2406.04770
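For illustration, the snippet below sketches the kind of LLM-as-judge scoring loop that such automated benchmarks rely on. The judge prompt, the 1-10 scale, and the judge model name are assumptions for this sketch, not WildBench's actual rubric or metrics.

```python
# Illustrative LLM-as-judge scoring loop; prompt, scale, and model name
# are placeholders, not WildBench's actual evaluation protocol.
from openai import OpenAI  # requires OPENAI_API_KEY in the environment

client = OpenAI()

def judge(query: str, response: str) -> int:
    """Ask a strong judge model to rate a response from 1 (worst) to 10 (best)."""
    prompt = (
        "Rate the assistant response to the user query on a 1-10 scale. "
        "Reply with the number only.\n\n"
        f"Query: {query}\n\nResponse: {response}"
    )
    completion = client.chat.completions.create(
        model="gpt-4o",  # placeholder judge model
        messages=[{"role": "user", "content": prompt}],
    )
    return int(completion.choices[0].message.content.strip())

score = judge("How do I split a string in Python?", "Use the str.split() method.")
print(score)
```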
Published in:
Indian Journal of Urology, Vol 38, Iss 3, Pp 244-245 (2022)
External link:
https://doaj.org/article/8a5044d1d6ae4553911381983de1a965
Multiple-choice question answering (MCQA) is often used to evaluate large language models (LLMs). To see if MCQA assesses LLMs as intended, we probe if LLMs can perform MCQA with choices-only prompts, where models must select the correct answer only… (a prompt sketch follows this record)
External link:
http://arxiv.org/abs/2402.12483
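The probe is easiest to see as a prompt ablation: a standard MCQA prompt versus the same prompt with the question withheld. The sketch below is illustrative; the exact prompt format used in the paper may differ.

```python
# Standard MCQA prompt vs. a choices-only prompt (question withheld);
# formatting is an illustrative assumption, not the paper's exact template.
def full_prompt(question: str, choices: list[str]) -> str:
    lines = [f"Question: {question}"]
    lines += [f"{chr(65 + i)}. {c}" for i, c in enumerate(choices)]  # A., B., ...
    lines.append("Answer:")
    return "\n".join(lines)

def choices_only_prompt(choices: list[str]) -> str:
    # Same answer-choice format, but the question is never shown to the model.
    lines = [f"{chr(65 + i)}. {c}" for i, c in enumerate(choices)]
    lines.append("Answer:")
    return "\n".join(lines)

print(full_prompt("What is the capital of France?", ["Paris", "Lyon", "Nice", "Lille"]))
print(choices_only_prompt(["Paris", "Lyon", "Nice", "Lille"]))
```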
Author:
Groeneveld, Dirk, Beltagy, Iz, Walsh, Pete, Bhagia, Akshita, Kinney, Rodney, Tafjord, Oyvind, Jha, Ananya Harsh, Ivison, Hamish, Magnusson, Ian, Wang, Yizhong, Arora, Shane, Atkinson, David, Authur, Russell, Chandu, Khyathi Raghavi, Cohan, Arman, Dumas, Jennifer, Elazar, Yanai, Gu, Yuling, Hessel, Jack, Khot, Tushar, Merrill, William, Morrison, Jacob, Muennighoff, Niklas, Naik, Aakanksha, Nam, Crystal, Peters, Matthew E., Pyatkin, Valentina, Ravichander, Abhilasha, Schwenk, Dustin, Shah, Saurabh, Smith, Will, Strubell, Emma, Subramani, Nishant, Wortsman, Mitchell, Dasigi, Pradeep, Lambert, Nathan, Richardson, Kyle, Zettlemoyer, Luke, Dodge, Jesse, Lo, Kyle, Soldaini, Luca, Smith, Noah A., Hajishirzi, Hannaneh
Language models (LMs) have become ubiquitous in both NLP research and in commercial product offerings. As their commercial importance has surged, the most powerful models have become closed off, gated behind proprietary interfaces, with important det…
External link:
http://arxiv.org/abs/2402.00838