Showing 1 - 10 of 24 for search: '"KOLT, NOAM"'
Author:
Chan, Alan, Kolt, Noam, Wills, Peter, Anwar, Usman, de Witt, Christian Schroeder, Rajkumar, Nitarshan, Hammond, Lewis, Krueger, David, Heim, Lennart, Anderljung, Markus
AI systems are increasingly pervasive, yet information needed to decide whether and how to engage with them may not exist or be accessible. A user may not be able to verify whether a system has certain safety certifications. An investigator may not know…
External link:
http://arxiv.org/abs/2406.12137
Author:
Kolt, Noam, Anderljung, Markus, Barnhart, Joslyn, Brass, Asher, Esvelt, Kevin, Hadfield, Gillian K., Heim, Lennart, Rodriguez, Mikel, Sandbrink, Jonas B., Woodside, Thomas
Mitigating the risks from frontier AI systems requires up-to-date and reliable information about those systems. Organizations that develop and deploy frontier systems have significant access to such information. By reporting safety-critical information…
External link:
http://arxiv.org/abs/2404.02675
Author:
Casper, Stephen, Ezell, Carson, Siegmann, Charlotte, Kolt, Noam, Curtis, Taylor Lynn, Bucknall, Benjamin, Haupt, Andreas, Wei, Kevin, Scheurer, Jérémy, Hobbhahn, Marius, Sharkey, Lee, Krishna, Satyapriya, Von Hagen, Marvin, Alberti, Silas, Chan, Alan, Sun, Qinyi, Gerovitch, Michael, Bau, David, Tegmark, Max, Krueger, David, Hadfield-Menell, Dylan
Published in:
The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24), June 3-6, 2024, Rio de Janeiro, Brazil
External audits of AI systems are increasingly recognized as a key mechanism for AI governance. The effectiveness of an audit, however, depends on the degree of access granted to auditors. Recent audits of state-of-the-art AI systems have primarily relied…
External link:
http://arxiv.org/abs/2401.14446
Author:
Chan, Alan, Ezell, Carson, Kaufmann, Max, Wei, Kevin, Hammond, Lewis, Bradley, Herbie, Bluemke, Emma, Rajkumar, Nitarshan, Krueger, David, Kolt, Noam, Heim, Lennart, Anderljung, Markus
Increased delegation of commercial, scientific, governmental, and personal activities to AI agents -- systems capable of pursuing complex goals with limited supervision -- may exacerbate existing societal risks and introduce new risks. Understanding…
External link:
http://arxiv.org/abs/2401.13138
LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models
Author:
Guha, Neel, Nyarko, Julian, Ho, Daniel E., Ré, Christopher, Chilton, Adam, Narayana, Aditya, Chohlas-Wood, Alex, Peters, Austin, Waldon, Brandon, Rockmore, Daniel N., Zambrano, Diego, Talisman, Dmitry, Hoque, Enam, Surani, Faiz, Fagan, Frank, Sarfaty, Galit, Dickinson, Gregory M., Porat, Haggai, Hegland, Jason, Wu, Jessica, Nudell, Joe, Niklaus, Joel, Nay, John, Choi, Jonathan H., Tobia, Kevin, Hagan, Margaret, Ma, Megan, Livermore, Michael, Rasumov-Rahe, Nikon, Holzenberger, Nils, Kolt, Noam, Henderson, Peter, Rehaag, Sean, Goel, Sharad, Gao, Shang, Williams, Spencer, Gandhi, Sunny, Zur, Tom, Iyer, Varun, Li, Zehua
The advent of large language models (LLMs) and their adoption by the legal community has given rise to the question: what types of legal reasoning can LLMs perform? To enable greater study of this question, we present LegalBench: a collaboratively constructed…
External link:
http://arxiv.org/abs/2308.11462
Author:
Anderljung, Markus, Barnhart, Joslyn, Korinek, Anton, Leung, Jade, O'Keefe, Cullen, Whittlestone, Jess, Avin, Shahar, Brundage, Miles, Bullock, Justin, Cass-Beggs, Duncan, Chang, Ben, Collins, Tantum, Fist, Tim, Hadfield, Gillian, Hayes, Alan, Ho, Lewis, Hooker, Sara, Horvitz, Eric, Kolt, Noam, Schuett, Jonas, Shavit, Yonadav, Siddarth, Divya, Trager, Robert, Wolf, Kevin
Advanced AI models hold the promise of tremendous benefits for humanity, but society needs to proactively manage the accompanying risks. In this paper, we focus on what we term "frontier AI" models: highly capable foundation models that could possess…
External link:
http://arxiv.org/abs/2307.03718
Author:
Shevlane, Toby, Farquhar, Sebastian, Garfinkel, Ben, Phuong, Mary, Whittlestone, Jess, Leung, Jade, Kokotajlo, Daniel, Marchal, Nahema, Anderljung, Markus, Kolt, Noam, Ho, Lewis, Siddarth, Divya, Avin, Shahar, Hawkins, Will, Kim, Been, Gabriel, Iason, Bolina, Vijay, Clark, Jack, Bengio, Yoshua, Christiano, Paul, Dafoe, Allan
Current approaches to building general-purpose AI systems tend to produce systems with both beneficial and harmful capabilities. Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities…
External link:
http://arxiv.org/abs/2305.15324
Author:
Cohen, Michael K., Kolt, Noam, Bengio, Yoshua, Hadfield, Gillian K., Russell, Stuart
Published in:
Science, 4/5/2024, Vol. 384, Issue 6691, pp. 36-38.
Author:
Kolt, Noam
Published in:
Yale Law & Policy Review, 2019, 38(1), pp. 77-149.
External link:
https://www.jstor.org/stable/45284528
Author:
Kolt, Noam
Published in:
Washington University Law Review, 2024, Vol. 101, Issue 4, pp. 1177-1240.