Clio: Privacy-Preserving Insights into Real-World AI Use

Authors: Tamkin, Alex, McCain, Miles, Handa, Kunal, Durmus, Esin, Lovitt, Liane, Rathi, Ankur, Huang, Saffron, Mountfield, Alfred, Hong, Jerry, Ritchie, Stuart, Stern, Michael, Clarke, Brian, Goldberg, Landon, Sumers, Theodore R., Mueller, Jared, McEachen, William, Mitchell, Wes, Carter, Shan, Clark, Jack, Kaplan, Jared, Ganguli, Deep
Publication year: 2024
Subject:
Document type: Working Paper
Description: How are AI assistants being used in the real world? While model providers in theory have a window into this impact via their users' data, both privacy concerns and practical challenges have made analyzing this data difficult. To address these issues, we present Clio (Claude insights and observations), a privacy-preserving platform that uses AI assistants themselves to analyze and surface aggregated usage patterns across millions of conversations, without the need for human reviewers to read raw conversations. We validate that this can be done with a high degree of accuracy and privacy by conducting extensive evaluations. We demonstrate Clio's usefulness in two broad ways. First, we share insights about how models are being used in the real world from one million Claude.ai Free and Pro conversations, ranging from advice on hairstyles to guidance on Git operations and concepts. We also identify the most common high-level use cases on Claude.ai (coding, writing, and research tasks) as well as patterns that differ across languages (e.g., conversations in Japanese discuss elder care and aging populations at higher-than-typical rates). Second, we use Clio to make our systems safer by identifying coordinated attempts to abuse our systems, monitoring for unknown unknowns during critical periods like launches of new capabilities or major world events, and improving our existing monitoring systems. We also discuss the limitations of our approach, as well as risks and ethical concerns. By enabling analysis of real-world AI usage, Clio provides a scalable platform for empirically grounded AI safety and governance.
Database: arXiv
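
To make the abstract's aggregation idea concrete, here is a minimal illustrative sketch in Python of one way such a pipeline could work; it is not the authors' implementation. The names summarize, aggregate, and MIN_CLUSTER_SIZE are hypothetical, the keyword-based summarizer merely stands in for a model call, and the minimum-cluster-size threshold is one assumed mechanism for keeping outputs aggregate-only.

    # Hypothetical sketch of a privacy-preserving usage-analysis pipeline,
    # in the spirit of the abstract above; NOT the authors' implementation.
    # A model-generated summary replaces human review of raw conversations,
    # and only sufficiently large clusters are ever reported.
    from collections import Counter

    MIN_CLUSTER_SIZE = 3  # assumed privacy threshold: suppress small clusters

    def summarize(conversation: str) -> str:
        # Stand-in for an AI-assistant call that returns a short,
        # de-identified topic label for a single conversation.
        lowered = conversation.lower()
        if "git" in lowered:
            return "guidance on Git operations"
        if "hair" in lowered:
            return "advice on hairstyles"
        return "other"

    def aggregate(conversations: list[str]) -> dict[str, int]:
        # Report only aggregate counts per topic; raw conversation text
        # never leaves this function, and topics with fewer than
        # MIN_CLUSTER_SIZE conversations are suppressed entirely.
        counts = Counter(summarize(c) for c in conversations)
        return {topic: n for topic, n in counts.items() if n >= MIN_CLUSTER_SIZE}

    if __name__ == "__main__":
        demo = ["How do I rebase in git?"] * 4 + ["Short hair ideas?"] * 2
        print(aggregate(demo))  # prints {'guidance on Git operations': 4}

The threshold here mirrors a k-anonymity-style suppression rule: topics backed by too few conversations are withheld, so no reported statistic can be traced to a small set of users.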