Showing 1 - 10 of 33 for search: '"Shilov, Igor"'
Membership inference attacks (MIAs) are widely used to empirically assess the privacy risks of samples used to train a target machine learning model. State-of-the-art methods, however, require training hundreds of shadow models with the same size and…
External link:
http://arxiv.org/abs/2411.05743
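The snippet above refers to the standard shadow-model recipe for membership inference. Below is a minimal sketch of that recipe on toy data, assuming a generic loss-based likelihood-ratio score; the logistic-regression models, shadow count, and Gaussian fit are illustrative choices, not the linked paper's method.

```python
# Toy shadow-model membership inference: train many shadow models on random subsets,
# record each sample's loss when it was IN vs OUT of a shadow's training set, then
# score membership in the target model via a simple Gaussian likelihood ratio.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

def per_sample_losses(model, X, y):
    # Per-sample cross-entropy under a trained binary classifier.
    p = np.clip(model.predict_proba(X)[:, 1], 1e-6, 1 - 1e-6)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

n_shadows = 20
in_losses = [[] for _ in range(len(X))]
out_losses = [[] for _ in range(len(X))]
for _ in range(n_shadows):
    member = rng.random(len(X)) < 0.5          # random half is "in" this shadow's data
    shadow = LogisticRegression().fit(X[member], y[member])
    losses = per_sample_losses(shadow, X, y)
    for i in range(len(X)):
        (in_losses if member[i] else out_losses)[i].append(losses[i])

# Target model trained on a fixed random subset; we try to infer that subset.
target_member = rng.random(len(X)) < 0.5
target = LogisticRegression().fit(X[target_member], y[target_member])
target_losses = per_sample_losses(target, X, y)

def score(i):
    # Gaussian likelihood ratio: higher => more likely a training member of the target.
    mu_in, sd_in = np.mean(in_losses[i]), np.std(in_losses[i]) + 1e-6
    mu_out, sd_out = np.mean(out_losses[i]), np.std(out_losses[i]) + 1e-6
    l = target_losses[i]
    return (-0.5 * ((l - mu_in) / sd_in) ** 2 - np.log(sd_in)) \
         - (-0.5 * ((l - mu_out) / sd_out) ** 2 - np.log(sd_out))

scores = np.array([score(i) for i in range(len(X))])
print("mean score, members:    ", scores[target_member].mean())
print("mean score, non-members:", scores[~target_member].mean())
```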
Author:
Meeus, Matthieu, Shilov, Igor, Jain, Shubham, Faysse, Manuel, Rei, Marek, de Montjoye, Yves-Alexandre
Whether LLMs memorize their training data -- and what this means, from privacy leakage to detecting copyright violations -- has become a rapidly growing area of research over the last two years. In recent months, more than 10 new methods have been proposed…
External link:
http://arxiv.org/abs/2406.17975
Author:
Wicker, Matthew, Sosnin, Philip, Shilov, Igor, Janik, Adrianna, Müller, Mark N., de Montjoye, Yves-Alexandre, Weller, Adrian, Tsay, Calvin
Differential privacy upper-bounds the information leakage of machine learning models, yet providing meaningful privacy guarantees has proven to be challenging in practice. The private prediction setting, where model outputs are privatized, is being investigated…
External link:
http://arxiv.org/abs/2406.13433
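For the private prediction setting mentioned above, one common way to privatize model outputs is PATE-style noisy aggregation of an ensemble's votes. The sketch below illustrates that generic idea on toy data; it is not the approach of the linked paper, and the teacher count, noise scale, and classifier are arbitrary.

```python
# Toy private prediction via noisy vote aggregation: train disjoint "teacher" models,
# then answer each query by adding Laplace noise to the vote histogram before argmax.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

# Partition the private data into disjoint shards and train one teacher per shard.
n_teachers, n_classes = 30, 2
shards = np.array_split(rng.permutation(len(X)), n_teachers)
teachers = [DecisionTreeClassifier(max_depth=4).fit(X[idx], y[idx]) for idx in shards]

def private_predict(x, epsilon=1.0):
    # Each teacher votes; Laplace noise is added to the vote counts before the argmax,
    # so only a privatized label (not raw model outputs) is released.
    votes = np.bincount([t.predict(x.reshape(1, -1))[0] for t in teachers],
                        minlength=n_classes).astype(float)
    noisy = votes + rng.laplace(scale=2.0 / epsilon, size=n_classes)
    return int(np.argmax(noisy))

x_query = rng.normal(size=10)
print("private label:", private_predict(x_query, epsilon=1.0))
```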
The immense datasets used to develop Large Language Models (LLMs) often include copyright-protected content, typically without the content creator's consent. Copyright traps have been proposed for injection into the original content, improving content…
External link:
http://arxiv.org/abs/2405.15523
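As a rough illustration of the trap idea described above, the sketch below fabricates a high-entropy synthetic sequence and repeats it at random positions inside a document. The trap format, length, and repetition count are invented for illustration and do not reflect the linked paper's design.

```python
# Illustrative "copyright trap" injection: a unique synthetic sentence repeated across
# a document so its memorization by a model trained on that document can later be tested.
import random
import string

def make_trap(n_words: int = 12, seed: int = 42) -> str:
    # A high-entropy synthetic sentence that is very unlikely to occur elsewhere.
    rng = random.Random(seed)
    words = ["".join(rng.choices(string.ascii_lowercase, k=rng.randint(4, 9)))
             for _ in range(n_words)]
    return " ".join(words)

def inject_trap(document: str, trap: str, n_copies: int = 10, seed: int = 0) -> str:
    # Insert the trap sequence between sentences at random positions.
    rng = random.Random(seed)
    sentences = document.split(". ")
    for _ in range(n_copies):
        sentences.insert(rng.randrange(len(sentences) + 1), trap)
    return ". ".join(sentences)

doc = "Original article text. " * 50
trapped = inject_trap(doc, make_trap())
# Detection (not shown): compare the trained model's loss/perplexity on the trap
# sequence against losses on unseen reference sequences of similar length.
print(trapped[:300])
```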
Questions of fair use of copyright-protected content to train Large Language Models (LLMs) are being actively debated. Document-level inference has been proposed as a new task: inferring from black-box access to the trained model whether a piece of content…
External link:
http://arxiv.org/abs/2402.09363
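A simple baseline for the document-level inference task described above is to score a candidate document by its mean per-token loss under the model and compare it against held-out reference texts. The sketch below uses GPT-2 purely as a stand-in target model with a hand-picked calibration set; it is a generic perplexity baseline, not necessarily the linked paper's method.

```python
# Baseline document-level membership score: mean negative log-likelihood per token,
# compared against the average score of reference documents assumed to be unseen.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def doc_score(text: str) -> float:
    # Lower values mean the document is "more familiar" to the model.
    ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024).input_ids
    return model(ids, labels=ids).loss.item()  # cross-entropy averaged over tokens

candidate = "Some document whose training-set membership we want to test."
references = ["A freshly written paragraph the model has certainly never seen.",
              "Another held-out reference text used for calibration."]

threshold = sum(doc_score(r) for r in references) / len(references)
print("candidate score:", doc_score(candidate), "| reference mean:", threshold)
print("flag as member:", doc_score(candidate) < threshold)
```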
Reconstruction attacks allow an adversary to regenerate data samples of the training set using only access to a trained model. It has recently been shown that simple heuristics can reconstruct data samples from language models, making this threat scenario…
External link:
http://arxiv.org/abs/2202.07623
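One of the simple heuristics alluded to above is prefix-prompted greedy decoding: prompt the model with a known prefix and check whether it regenerates a memorized continuation. The sketch below shows that generic heuristic, with GPT-2 and an arbitrary prefix as stand-ins rather than the linked paper's setup.

```python
# Simple reconstruction heuristic against a language model: greedy decoding from a
# known prefix, hoping the model regurgitates a verbatim training continuation.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def reconstruct(prefix: str, max_new_tokens: int = 40) -> str:
    # Greedy decoding maximizes the chance of reproducing a memorized string.
    ids = tokenizer(prefix, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(reconstruct("My email address is"))
```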
Author:
Yousefpour, Ashkan, Shilov, Igor, Sablayrolles, Alexandre, Testuggine, Davide, Prasad, Karthik, Malek, Mani, Nguyen, John, Ghosh, Sayan, Bharadwaj, Akash, Zhao, Jessica, Cormode, Graham, Mironov, Ilya
We introduce Opacus, a free, open-source PyTorch library for training deep learning models with differential privacy (hosted at opacus.ai). Opacus is designed for simplicity, flexibility, and speed. It provides a simple and user-friendly API, and enables…
External link:
http://arxiv.org/abs/2109.12298
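The Opacus API mentioned above can be exercised in a few lines: wrap an ordinary PyTorch model, optimizer, and dataloader with `PrivacyEngine.make_private` and train as usual. The model, synthetic data, and hyperparameters below are illustrative; the calls shown (`make_private`, `get_epsilon`) follow the library's documented interface.

```python
# Minimal DP-SGD training loop with Opacus.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
data = TensorDataset(torch.randn(512, 20), torch.randint(0, 2, (512,)))
loader = DataLoader(data, batch_size=64)

privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,   # scale of Gaussian noise added to clipped gradients
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

criterion = nn.CrossEntropyLoss()
for epoch in range(3):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()

# Track the privacy budget spent so far (delta here is illustrative).
print("epsilon:", privacy_engine.get_epsilon(delta=1e-5))
```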
We consider the privacy-preserving machine learning (ML) setting where the trained model must satisfy differential privacy (DP) with respect to the labels of the training examples. We propose two novel approaches based on, respectively, the Laplace mechanism…
External link:
http://arxiv.org/abs/2106.03408
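As context for the Laplace-mechanism building block named above, the sketch below applies it to one-hot training labels: each label vector is released with Laplace noise scaled to its L1 sensitivity. This illustrates the mechanism only and is not the paper's full label-DP training algorithm.

```python
# Laplace mechanism applied to labels: release noisy one-hot label vectors.
# Changing one label changes two coordinates of its one-hot vector by 1 each,
# so the L1 sensitivity is 2 and Laplace noise with scale 2/epsilon gives
# epsilon-DP with respect to that label.
import numpy as np

rng = np.random.default_rng(0)

def privatize_labels(labels: np.ndarray, n_classes: int, epsilon: float) -> np.ndarray:
    one_hot = np.eye(n_classes)[labels]
    noise = rng.laplace(scale=2.0 / epsilon, size=one_hot.shape)
    return one_hot + noise

labels = rng.integers(0, 10, size=5)
noisy = privatize_labels(labels, n_classes=10, epsilon=2.0)
print("true labels:    ", labels)
print("argmax of noisy:", noisy.argmax(axis=1))  # a simple (suboptimal) decoding
```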
Author:
Shilov, Igor O., Staritzyn, Dmitry K.
Published in:
Proceedings of the International Multidisciplinary Scientific GeoConference SGEM, 2011, Vol. 2, pp. 1003-1009.
Academic article