Showing 1 - 4 of 4
for search: '"Zavalny, Alex"'
Author:
Duan, Jinhao, Cheng, Hao, Wang, Shiqi, Zavalny, Alex, Wang, Chenan, Xu, Renjing, Kailkhura, Bhavya, Xu, Kaidi
Large Language Models (LLMs) show promising results in language generation and instruction following but frequently "hallucinate", making their outputs less reliable. Despite Uncertainty Quantification's (UQ) potential solutions, implementing it accurately…
External link:
http://arxiv.org/abs/2307.01379
Author:
Tripathi, Satvik, Augustin, Alisha Isabelle, Dunlop, Adam, Sukumaran, Rithvik, Dheer, Suhani, Zavalny, Alex, Haslam, Owen, Austin, Thomas, Donchez, Jacob, Tripathi, Pushpendra Kumar, Kim, Edward
Published in:
In Artificial Intelligence in the Life Sciences, December 2022, 2
Author:
Tripathi, Satvik, Moyer, Ethan Jacob, Augustin, Alisha Isabelle, Zavalny, Alex, Dheer, Suhani, Sukumaran, Rithvik, Schwartz, Daniel, Gorski, Brandon, Dako, Farouk, Kim, Edward
Published in:
In Informatics in Medicine Unlocked, 2022, 33
Author:
Duan, Jinhao, Cheng, Hao, Wang, Shiqi, Wang, Chenan, Zavalny, Alex, Xu, Renjing, Kailkhura, Bhavya, Xu, Kaidi
Although Large Language Models (LLMs) have shown great potential in Natural Language Generation, it is still challenging to characterize the uncertainty of model generations, i.e., when users could trust model outputs. Our research is derived from th…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e9189889e1821c0e9d593d3b7dfd3cbb
http://arxiv.org/abs/2307.01379