Zobrazeno 1 - 10
of 1 221
pro vyhledávání: '"Bianchi, Federico"'
Autor:
Suzgun, Mirac, Gur, Tayfun, Bianchi, Federico, Ho, Daniel E., Icard, Thomas, Jurafsky, Dan, Zou, James
As language models (LMs) become integral to fields like healthcare, law, and journalism, their ability to differentiate between fact, belief, and knowledge is essential for reliable decision-making. Failure to grasp these distinctions can lead to sig
Externí odkaz:
http://arxiv.org/abs/2410.21195
Autor:
Doumbouya, Moussa Koulako Bala, Nandi, Ananjan, Poesia, Gabriel, Ghilardi, Davide, Goldie, Anna, Bianchi, Federico, Jurafsky, Dan, Manning, Christopher D.
The safety of Large Language Models (LLMs) remains a critical concern due to a lack of adequate benchmarks for systematically evaluating their ability to resist generating harmful content. Previous efforts towards automated red teaming involve static
Externí odkaz:
http://arxiv.org/abs/2408.04811
Autor:
Yuksekgonul, Mert, Bianchi, Federico, Boen, Joseph, Liu, Sheng, Huang, Zhi, Guestrin, Carlos, Zou, James
AI is undergoing a paradigm shift, with breakthroughs achieved by systems orchestrating multiple large language models (LLMs) and other complex components. As a result, developing principled and automated optimization methods for compound AI systems
Externí odkaz:
http://arxiv.org/abs/2406.07496
Autor:
Bianchi, Federico, Zou, James
The risks derived from large language models (LLMs) generating deceptive and damaging content have been the subject of considerable research, but even safe generations can lead to problematic downstream impacts. In our study, we shift the focus to ho
Externí odkaz:
http://arxiv.org/abs/2402.13926
Autor:
Bianchi, Federico, Chia, Patrick John, Yuksekgonul, Mert, Tagliabue, Jacopo, Jurafsky, Dan, Zou, James
Negotiation is the basis of social interactions; humans negotiate everything from the price of cars to how to share common resources. With rapidly growing interest in using large language models (LLMs) to act as agents on behalf of human users, such
Externí odkaz:
http://arxiv.org/abs/2402.05863
The rapid and massive diffusion of electric vehicles poses new challenges to the electric system, which must be able to supply these new loads, but at the same time opens up new opportunities thanks to the possible provision of ancillary services. In
Externí odkaz:
http://arxiv.org/abs/2309.11118
Autor:
Bianchi, Federico, Suzgun, Mirac, Attanasio, Giuseppe, Röttger, Paul, Jurafsky, Dan, Hashimoto, Tatsunori, Zou, James
Training large language models to follow instructions makes them perform better on a wide range of tasks and generally become more helpful. However, a perfectly helpful model will follow even the most malicious instructions and readily generate harmf
Externí odkaz:
http://arxiv.org/abs/2309.07875
Autor:
Röttger, Paul, Kirk, Hannah Rose, Vidgen, Bertie, Attanasio, Giuseppe, Bianchi, Federico, Hovy, Dirk
Without proper safeguards, large language models will readily follow malicious instructions and generate toxic content. This risk motivates safety efforts such as red-teaming and large-scale feedback learning, which aim to make models both helpful an
Externí odkaz:
http://arxiv.org/abs/2308.01263
Autor:
Chia, Patrick John, Attanasio, Giuseppe, Tagliabue, Jacopo, Bianchi, Federico, Greco, Ciro, Moreira, Gabriel de Souza P., Eynard, Davide, Husain, Fahd
Recommender Systems today are still mostly evaluated in terms of accuracy, with other aspects beyond the immediate relevance of recommendations, such as diversity, long-term user retention and fairness, often taking a back seat. Moreover, reconciling
Externí odkaz:
http://arxiv.org/abs/2304.10621
Autor:
Bianchi, Federico, Chia, Patrick John, Greco, Ciro, Pomo, Claudio, Moreira, Gabriel, Eynard, Davide, Husain, Fahd, Tagliabue, Jacopo
EvalRS aims to bring together practitioners from industry and academia to foster a debate on rounded evaluation of recommender systems, with a focus on real-world impact across a multitude of deployment scenarios. Recommender systems are often evalua
Externí odkaz:
http://arxiv.org/abs/2304.07145