Showing 1 - 10 of 10 for search: '"Reuel, Anka"'
Author:
Reuel, Anka, Connolly, Patrick, Meimandi, Kiana Jafari, Tewari, Shekhar, Wiatrak, Jakub, Venkatesh, Dikshita, Kochenderfer, Mykel
Responsible AI (RAI) has emerged as a major focus across industry, policymaking, and academia, aiming to mitigate the risks and maximize the benefits of AI, both on an organizational and societal level. This study explores the global state of RAI…
External link:
http://arxiv.org/abs/2410.09985
Author:
Reuel, Anka, Bucknall, Ben, Casper, Stephen, Fist, Tim, Soder, Lisa, Aarne, Onni, Hammond, Lewis, Ibrahim, Lujain, Chan, Alan, Wills, Peter, Anderljung, Markus, Garfinkel, Ben, Heim, Lennart, Trask, Andrew, Mukobi, Gabriel, Schaeffer, Rylan, Baker, Mauricio, Hooker, Sara, Solaiman, Irene, Luccioni, Alexandra Sasha, Rajkumar, Nitarshan, Moës, Nicolas, Ladish, Jeffrey, Guha, Neel, Newman, Jessica, Bengio, Yoshua, South, Tobin, Pentland, Alex, Koyejo, Sanmi, Kochenderfer, Mykel J., Trager, Robert
AI progress is creating a growing range of risks and opportunities, but it is often unclear how they should be navigated. In many cases, the barriers and uncertainties faced are at least partly technical. Technical AI governance, referring to…
External link:
http://arxiv.org/abs/2407.14981
In light of recent advancements in AI capabilities and the increasingly widespread integration of AI systems into society, governments worldwide are actively seeking to mitigate the potential harms and risks associated with these technologies through…
External link:
http://arxiv.org/abs/2406.06987
Author:
Reuel, Anka, Undheim, Trond Arne
Because of the speed of its development, broad scope of application, and its ability to augment human performance, generative AI challenges the very notions of governance, trust, and human agency. The technology's capacity to mimic human knowledge…
External link:
http://arxiv.org/abs/2406.04554
Author:
Maslej, Nestor, Fattorini, Loredana, Perrault, Raymond, Parli, Vanessa, Reuel, Anka, Brynjolfsson, Erik, Etchemendy, John, Ligett, Katrina, Lyons, Terah, Manyika, James, Niebles, Juan Carlos, Shoham, Yoav, Wald, Russell, Clark, Jack
The 2024 Index is our most comprehensive to date and arrives at an important moment when AI's influence on society has never been more pronounced. This year, we have broadened our scope to more extensively cover essential trends such as technical…
External link:
http://arxiv.org/abs/2405.19522
Author:
Reuel, Anka, Ma, Devin
While our understanding of fairness in machine learning has significantly progressed, our understanding of fairness in reinforcement learning (RL) remains nascent. Most of the attention has been on fairness in one-shot classification tasks; however…
External link:
http://arxiv.org/abs/2405.06909
Author:
Rivera, Juan-Pablo, Mukobi, Gabriel, Reuel, Anka, Lamparth, Max, Smith, Chandler, Schneider, Jacquelyn
Published in:
The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT 24), June 3-6, 2024, Rio de Janeiro, Brazil
Governments are increasingly considering integrating autonomous AI agents in high-stakes military and foreign-policy decision-making, especially with the emergence of advanced generative AI models like GPT-4. Our work aims to scrutinize the behavior…
External link:
http://arxiv.org/abs/2401.03408
Author:
Trager, Robert, Harack, Ben, Reuel, Anka, Carnegie, Allison, Heim, Lennart, Ho, Lewis, Kreps, Sarah, Lall, Ranjit, Larter, Owen, hÉigeartaigh, Seán Ó, Staffell, Simon, Villalobos, José Jaime
This report describes trade-offs in the design of international governance arrangements for civilian artificial intelligence (AI) and presents one approach in detail. This approach represents the extension of a standards, licensing, and liability…
External link:
http://arxiv.org/abs/2308.15514
Organizations that develop and deploy artificial intelligence (AI) systems need to take measures to reduce the associated risks. In this paper, we examine how AI companies could design an AI ethics board in a way that reduces risks from AI…
External link:
http://arxiv.org/abs/2304.07249
Author:
Lamparth, Max, Reuel, Anka
Published in:
The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT 24), June 3-6, 2024, Rio de Janeiro, Brazil
Poisoning of data sets is a potential security threat to large language models that can lead to backdoored models. A description of the internal mechanisms of backdoored language models and how they process trigger inputs, e.g., when switching to…
External link:
http://arxiv.org/abs/2302.12461