Showing 1 - 10 of 11,687 for search: '"Carlini, A A"'
Author:
Qi, Xiangyu, Wei, Boyi, Carlini, Nicholas, Huang, Yangsibo, Xie, Tinghao, He, Luxi, Jagielski, Matthew, Nasr, Milad, Mittal, Prateek, Henderson, Peter
Stakeholders -- from model developers to policymakers -- seek to minimize the dual-use risks of large language models (LLMs). An open challenge to this goal is whether technical safeguards can impede the misuse of LLMs, even when models are customizable…
External link:
http://arxiv.org/abs/2412.07097
Author:
Zhao, Xuandong, Gunn, Sam, Christ, Miranda, Fairoze, Jaiden, Fabrega, Andres, Carlini, Nicholas, Garg, Sanjam, Hong, Sanghyun, Nasr, Milad, Tramer, Florian, Jha, Somesh, Li, Lei, Wang, Yu-Xiang, Song, Dawn
As the outputs of generative AI (GenAI) techniques improve in quality, it becomes increasingly challenging to distinguish them from human-created content. Watermarking schemes are a promising approach to address the problem of distinguishing between…
External link:
http://arxiv.org/abs/2411.18479
Ensemble everything everywhere is a defense to adversarial examples that was recently proposed to make image classifiers robust. This defense works by ensembling a model's intermediate representations at multiple noisy image resolutions, producing a…
External link:
http://arxiv.org/abs/2411.14834
Author:
Aerni, Michael, Rando, Javier, Debenedetti, Edoardo, Carlini, Nicholas, Ippolito, Daphne, Tramèr, Florian
Large language models memorize parts of their training data. Memorizing short snippets and facts is required to answer questions about the world and to be fluent in any language. But models have also been shown to reproduce long verbatim sequences of…
External link:
http://arxiv.org/abs/2411.10242
We examine the numerical approximation of time-dependent Hamilton-Jacobi equations on networks, providing a convergence error estimate for the semi-Lagrangian scheme introduced in (Carlini and Siconolfi, 2023), where convergence was proven without an…
External link:
http://arxiv.org/abs/2411.02356
Mixture-of-Experts (MoE) models improve the efficiency and scalability of dense language models by routing each token to a small number of experts in each layer. In this paper, we show how an adversary that can arrange for their queries to appear in…
External link:
http://arxiv.org/abs/2410.22884
Author:
Carlini, Nicholas, Nasr, Milad
Scaling up language models has significantly increased their capabilities. But larger models are slower models, and so there is now an extensive body of work (e.g., speculative sampling or parallel decoding) that improves the (average case) efficiency…
External link:
http://arxiv.org/abs/2410.17175
Author:
Zhang, Yiming, Rando, Javier, Evtimov, Ivan, Chi, Jianfeng, Smith, Eric Michael, Carlini, Nicholas, Tramèr, Florian, Ippolito, Daphne
Large language models are pre-trained on uncurated text datasets consisting of trillions of tokens scraped from the Web. Prior work has shown that: (1) web-scraped pre-training datasets can be practically poisoned by malicious actors; and (2) adversaries…
External link:
http://arxiv.org/abs/2410.13722
Author:
Carlini, Nicholas, Chávez-Saab, Jorge, Hambitzer, Anna, Rodríguez-Henríquez, Francisco, Shamir, Adi
Deep neural networks (DNNs) are valuable assets, yet their public accessibility raises security concerns about parameter extraction by malicious actors. Recent work by Carlini et al. (Crypto '20) and Canales-Martínez et al. (Eurocrypt '24) has drawn…
External link:
http://arxiv.org/abs/2410.05750
Author:
Zeinoddin, Mona Sheikh, Lena, Chiara, Qu, Jiongqi, Carlini, Luca, Magro, Mattia, Kim, Seunghoi, De Momi, Elena, Bano, Sophia, Grech-Sollars, Matthew, Mazomenos, Evangelos, Alexander, Daniel C., Stoyanov, Danail, Clarkson, Matthew J., Islam, Mobarakol
Robotic-assisted surgery (RAS) relies on accurate depth estimation for 3D reconstruction and visualization. While foundation models like Depth Anything Models (DAM) show promise, directly applying them to surgery often yields suboptimal results. Full…
External link:
http://arxiv.org/abs/2408.17433