Showing 1 - 10 of 95 for search: '"Jernite, Yacine"'
Author:
Don-Yehiya, Shachar, Burtenshaw, Ben, Astudillo, Ramon Fernandez, Osborne, Cailean, Jaiswal, Mimansa, Kuo, Tzu-Sheng, Zhao, Wenting, Shenfeld, Idan, Peng, Andi, Yurochkin, Mikhail, Kasirzadeh, Atoosa, Huang, Yangsibo, Hashimoto, Tatsunori, Jernite, Yacine, Vila-Suero, Daniel, Abend, Omri, Ding, Jennifer, Hooker, Sara, Kirk, Hannah Rose, Choshen, Leshem
Human feedback on conversations with large language models (LLMs) is central to how these systems learn about the world, improve their capabilities, and are steered toward desirable and safe behaviors. However, this feedback is mostly collected by
External link:
http://arxiv.org/abs/2408.16961
Author:
Longpre, Shayne, Biderman, Stella, Albalak, Alon, Schoelkopf, Hailey, McDuff, Daniel, Kapoor, Sayash, Klyman, Kevin, Lo, Kyle, Ilharco, Gabriel, San, Nay, Rauh, Maribeth, Skowron, Aviya, Vidgen, Bertie, Weidinger, Laura, Narayanan, Arvind, Sanh, Victor, Adelani, David, Liang, Percy, Bommasani, Rishi, Henderson, Peter, Luccioni, Sasha, Jernite, Yacine, Soldaini, Luca
Foundation model development attracts a rapidly expanding body of contributors, scientists, and applications. To help shape responsible development practices, we introduce the Foundation Model Development Cheatsheet: a growing collection of 250+ tool
External link:
http://arxiv.org/abs/2406.16746
Author:
Pistilli, Giada, Leidinger, Alina, Jernite, Yacine, Kasirzadeh, Atoosa, Luccioni, Alexandra Sasha, Mitchell, Margaret
This paper introduces the "CIVICS: Culturally-Informed & Values-Inclusive Corpus for Societal impacts" dataset, designed to evaluate the social and cultural variation of Large Language Models (LLMs) across multiple languages and value-sensitive topic
External link:
http://arxiv.org/abs/2405.13974
Author:
Lozhkov, Anton, Li, Raymond, Allal, Loubna Ben, Cassano, Federico, Lamy-Poirier, Joel, Tazi, Nouamane, Tang, Ao, Pykhtar, Dmytro, Liu, Jiawei, Wei, Yuxiang, Liu, Tianyang, Tian, Max, Kocetkov, Denis, Zucker, Arthur, Belkada, Younes, Wang, Zijian, Liu, Qian, Abulkhanov, Dmitry, Paul, Indraneil, Li, Zhuang, Li, Wen-Ding, Risdal, Megan, Li, Jia, Zhu, Jian, Zhuo, Terry Yue, Zheltonozhskii, Evgenii, Dade, Nii Osae Osae, Yu, Wenhao, Krauß, Lucas, Jain, Naman, Su, Yixuan, He, Xuanli, Dey, Manan, Abati, Edoardo, Chai, Yekun, Muennighoff, Niklas, Tang, Xiangru, Oblokulov, Muhtasham, Akiki, Christopher, Marone, Marc, Mou, Chenghao, Mishra, Mayank, Gu, Alex, Hui, Binyuan, Dao, Tri, Zebaze, Armel, Dehaene, Olivier, Patry, Nicolas, Xu, Canwen, McAuley, Julian, Hu, Han, Scholak, Torsten, Paquet, Sebastien, Robinson, Jennifer, Anderson, Carolyn Jane, Chapados, Nicolas, Patwary, Mostofa, Tajbakhsh, Nima, Jernite, Yacine, Ferrandis, Carlos Muñoz, Zhang, Lingming, Hughes, Sean, Wolf, Thomas, Guha, Arjun, von Werra, Leandro, de Vries, Harm
The BigCode project, an open-scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2. In partnership with Software Heritage (SWH), we build The Stack v2 on top of the digita
External link:
http://arxiv.org/abs/2402.19173
Author:
Kapoor, Sayash, Bommasani, Rishi, Klyman, Kevin, Longpre, Shayne, Ramaswami, Ashwin, Cihon, Peter, Hopkins, Aspen, Bankston, Kevin, Biderman, Stella, Bogen, Miranda, Chowdhury, Rumman, Engler, Alex, Henderson, Peter, Jernite, Yacine, Lazar, Seth, Maffulli, Stefano, Nelson, Alondra, Pineau, Joelle, Skowron, Aviya, Song, Dawn, Storchan, Victor, Zhang, Daniel, Ho, Daniel E., Liang, Percy, Narayanan, Arvind
Foundation models are powerful technologies: how they are released publicly directly shapes their societal impact. In this position paper, we focus on open foundation models, defined here as those with broadly available model weights (e.g. Llama 2, S
External link:
http://arxiv.org/abs/2403.07918
Author:
McDuff, Daniel, Korjakow, Tim, Cambo, Scott, Benjamin, Jesse Josua, Lee, Jenny, Jernite, Yacine, Ferrandis, Carlos Muñoz, Gokaslan, Aaron, Tarkowski, Alek, Lindley, Joseph, Cooper, A. Feder, Contractor, Danish
Growing concerns over negligent or malicious uses of AI have increased the appetite for tools that help manage the risks of the technology. In 2018, licenses with behavioral-use clauses (commonly referred to as Responsible AI Licenses) were proposed
External link:
http://arxiv.org/abs/2402.05979
Author:
BigCode collaboration, Hughes, Sean, de Vries, Harm, Robinson, Jennifer, Ferrandis, Carlos Muñoz, Allal, Loubna Ben, von Werra, Leandro, Ding, Jennifer, Paquet, Sebastien, Jernite, Yacine
This document serves as an overview of the different mechanisms and areas of governance in the BigCode project. It aims to support transparency by providing relevant information about choices that were made during the project to the broader public, a
External link:
http://arxiv.org/abs/2312.03872
Published in:
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT '24), June 3--6, 2024, Rio de Janeiro, Brazil
Recent years have seen a surge in the popularity of commercial AI products based on generative, multi-purpose AI systems that promise a unified approach to building machine learning (ML) models into technology. However, this ambition of "generality" c
External link:
http://arxiv.org/abs/2311.16863
Author:
Solaiman, Irene, Talat, Zeerak, Agnew, William, Ahmad, Lama, Baker, Dylan, Blodgett, Su Lin, Chen, Canyu, Daumé III, Hal, Dodge, Jesse, Duan, Isabella, Evans, Ellie, Friedrich, Felix, Ghosh, Avijit, Gohar, Usman, Hooker, Sara, Jernite, Yacine, Kalluri, Ria, Lusoli, Alberto, Leidinger, Alina, Lin, Michelle, Lin, Xiuzhu, Luccioni, Sasha, Mickel, Jennifer, Mitchell, Margaret, Newman, Jessica, Ovalle, Anaelia, Png, Marie-Therese, Singh, Shubham, Strait, Andrew, Struppek, Lukas, Subramonian, Arjun
Generative AI systems across modalities, including text (and code), image, audio, and video, have broad social impacts, but there is no official standard for how those impacts should be evaluated or which impacts should be evaluated. In this
External link:
http://arxiv.org/abs/2306.05949
The growing need for accountability of the people behind AI systems can be addressed by leveraging processes in three fields of study: ethics, law, and computer science. While these fields are often considered in isolation, they rely on complementary
External link:
http://arxiv.org/abs/2305.18615