Showing 1 - 10 of 650
for search: '"Ravichander"'
Author:
Sandip Bhattacharya, Mohammed Imran Hussain, John Ajayan, Shubham Tayal, Louis Maria Irudaya Leo Joseph, Sreedhar Kollemm, Usha Desai, Syed Musthak Ahmed, Ravichander Janapati
Published in:
ETRI Journal, Vol 45, Iss 5, Pp 910-921 (2023)
In this study, we designed a 6T-SRAM cell using a 16-nm CMOS process and analyzed its performance in terms of read-speed latency. Temperature-dependent Cu and multilayered graphene nanoribbon (MLGNR)-based nanointerconnect materials are used throughout…
External link:
https://doaj.org/article/06b05190d72c4c72b7c5f0654fa309e9
Published in:
IBRO Neuroscience Reports, Vol 15, Iss , Pp S894- (2023)
External link:
https://doaj.org/article/59d22c092e1041fa8258b7c9f8a57d12
Published in:
IBRO Neuroscience Reports, Vol 15, Iss , Pp S798- (2023)
External link:
https://doaj.org/article/4844d0d7a2a645348efc4b74cb70736c
Author:
Rezaei, Keivan, Chandu, Khyathi, Feizi, Soheil, Choi, Yejin, Brahman, Faeze, Ravichander, Abhilasha
Large language models trained on web-scale corpora can memorize undesirable datapoints such as incorrect facts, copyrighted content, or sensitive data. Recently, many machine unlearning methods have been proposed that aim to 'erase' these datapoints from…
External link:
http://arxiv.org/abs/2411.00204
Author:
Balepur, Nishant, Gu, Feng, Ravichander, Abhilasha, Feng, Shi, Boyd-Graber, Jordan, Rudinger, Rachel
Question answering (QA)-producing correct answers for input questions-is popular, but we test a reverse question answering (RQA) task: given an input answer, generate a question with that answer. Past work tests QA and RQA separately, but we test the…
External link:
http://arxiv.org/abs/2410.15512
Author:
Zhao, Wenting, Goyal, Tanya, Chiu, Yu Ying, Jiang, Liwei, Newman, Benjamin, Ravichander, Abhilasha, Chandu, Khyathi, Bras, Ronan Le, Cardie, Claire, Deng, Yuntian, Choi, Yejin
While hallucinations of large language models (LLMs) prevail as a major challenge, existing evaluation benchmarks on factuality do not cover the diverse domains of knowledge that the real-world users of LLMs seek information about. To bridge this gap…
External link:
http://arxiv.org/abs/2407.17468
Author:
Brahman, Faeze, Kumar, Sachin, Balachandran, Vidhisha, Dasigi, Pradeep, Pyatkin, Valentina, Ravichander, Abhilasha, Wiegreffe, Sarah, Dziri, Nouha, Chandu, Khyathi, Hessel, Jack, Tsvetkov, Yulia, Smith, Noah A., Choi, Yejin, Hajishirzi, Hannaneh
Chat-based language models are designed to be helpful, yet they should not comply with every user request. While most existing work primarily focuses on refusal of "unsafe" queries, we posit that the scope of noncompliance should be broadened. We introduce…
External link:
http://arxiv.org/abs/2407.12043
Author:
Lin, Bill Yuchen, Deng, Yuntian, Chandu, Khyathi, Brahman, Faeze, Ravichander, Abhilasha, Pyatkin, Valentina, Dziri, Nouha, Bras, Ronan Le, Choi, Yejin
We introduce WildBench, an automated evaluation framework designed to benchmark large language models (LLMs) using challenging, real-world user queries. WildBench consists of 1,024 tasks carefully selected from over one million human-chatbot conversations…
External link:
http://arxiv.org/abs/2406.04770
Multiple-choice question answering (MCQA) is often used to evaluate large language models (LLMs). To see if MCQA assesses LLMs as intended, we probe if LLMs can perform MCQA with choices-only prompts, where models must select the correct answer only…
External link:
http://arxiv.org/abs/2402.12483
Author:
Groeneveld, Dirk, Beltagy, Iz, Walsh, Pete, Bhagia, Akshita, Kinney, Rodney, Tafjord, Oyvind, Jha, Ananya Harsh, Ivison, Hamish, Magnusson, Ian, Wang, Yizhong, Arora, Shane, Atkinson, David, Authur, Russell, Chandu, Khyathi Raghavi, Cohan, Arman, Dumas, Jennifer, Elazar, Yanai, Gu, Yuling, Hessel, Jack, Khot, Tushar, Merrill, William, Morrison, Jacob, Muennighoff, Niklas, Naik, Aakanksha, Nam, Crystal, Peters, Matthew E., Pyatkin, Valentina, Ravichander, Abhilasha, Schwenk, Dustin, Shah, Saurabh, Smith, Will, Strubell, Emma, Subramani, Nishant, Wortsman, Mitchell, Dasigi, Pradeep, Lambert, Nathan, Richardson, Kyle, Zettlemoyer, Luke, Dodge, Jesse, Lo, Kyle, Soldaini, Luca, Smith, Noah A., Hajishirzi, Hannaneh
Language models (LMs) have become ubiquitous in both NLP research and in commercial product offerings. As their commercial importance has surged, the most powerful models have become closed off, gated behind proprietary interfaces, with important details…
External link:
http://arxiv.org/abs/2402.00838