Showing 1 - 10 of 159 results for search: '"Abadi, Martin"'
Author:
Abadi, Martin, Plotkin, Gordon
Published in:
Logical Methods in Computer Science, Volume 19, Issue 2 (April 20, 2023) lmcs:8372
Describing systems in terms of choices and their resulting costs and rewards offers the promise of freeing algorithm designers and programmers from specifying how those choices should be made; in implementations, the choices can be realized by optimization …
External link:
http://arxiv.org/abs/2007.08926
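The entry above describes programs that state only their choices and the resulting rewards, leaving the actual selection to an optimizer. As a rough illustration (not the paper's formalism; the choose helper and the batch-size reward below are invented here), such a declared choice can be realized by brute-force maximization:

# A minimal sketch (not from the paper) of realizing a declared choice by
# optimization: the program states the options and a reward, and a generic
# helper picks the option with the highest reward.

def choose(options, reward):
    """Return the option maximizing the given reward function."""
    return max(options, key=reward)

# Hypothetical example: pick a batch size purely by its declared reward.
def reward(batch_size):
    throughput = batch_size / (1.0 + 0.01 * batch_size)  # toy cost model
    return throughput

best = choose([8, 16, 32, 64, 128], reward)
print("chosen batch size:", best)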
Author:
Abadi, Martin, Plotkin, Gordon D.
Automatic differentiation plays a prominent role in scientific computing and in modern machine learning, often in the context of powerful programming systems. The relation of the various embodiments of automatic differentiation to the mathematical notion …
External link:
http://arxiv.org/abs/1911.04523
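As a loose illustration of how program-level automatic differentiation connects to the mathematical derivative (this is a generic dual-number sketch, not the language studied in the paper), forward-mode AD can be expressed as arithmetic on value/derivative pairs:

# Forward-mode automatic differentiation with dual numbers: every operation
# propagates both a value and its derivative. Illustrative toy only.

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

    __rmul__ = __mul__

def derivative(f, x):
    """d f / d x at x, computed by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).deriv

print(derivative(lambda x: x * x + 3 * x, 2.0))  # 2*2 + 3 = 7.0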
Author:
Yu, Yuan, Abadi, Martín, Barham, Paul, Brevdo, Eugene, Burrows, Mike, Davis, Andy, Dean, Jeff, Ghemawat, Sanjay, Harley, Tim, Hawkins, Peter, Isard, Michael, Kudlur, Manjunath, Monga, Rajat, Murray, Derek, Zheng, Xiaoqiang
Published in:
EuroSys 2018: Thirteenth EuroSys Conference, April 23-26, 2018, Porto, Portugal. ACM, New York, NY, USA
Many recent machine learning models rely on fine-grained dynamic control flow for training and inference. In particular, models based on recurrent neural networks and on reinforcement learning depend on recurrence relations, data-dependent conditional execution …
External link:
http://arxiv.org/abs/1805.01772
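"Data-dependent control flow" here means loops and branches whose behaviour depends on computed values, as in the toy recurrence below (plain NumPy with invented weights and a made-up stopping rule, not the paper's TensorFlow implementation):

# A recurrence relation with a data-dependent conditional: the loop runs
# until a condition computed from the hidden state holds.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))   # hypothetical recurrent weights
h = np.zeros(4)                          # hidden state
inputs = rng.normal(size=(20, 4))        # hypothetical input sequence

steps = 0
for x in inputs:
    h = np.tanh(W @ h + x)               # recurrence relation
    steps += 1
    if np.linalg.norm(h) > 1.5:          # data-dependent conditional: stop early
        break

print("ran", steps, "steps; final norm", float(np.linalg.norm(h)))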
We present a method to create universal, robust, targeted adversarial image patches in the real world. The patches are universal because they can be used to attack any scene, robust because they work under a wide variety of transformations, and targeted …
External link:
http://arxiv.org/abs/1712.09665
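The core idea is to optimize the patch pixels so that, wherever the patch is placed, a classifier's score for an attacker-chosen class increases. The sketch below uses a toy linear classifier and translation-only "transformations", so everything beyond the general structure is an invented stand-in:

# Gradient ascent on patch pixels to maximize a target-class logit over
# random placements in toy scenes.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 16 * 16))         # toy linear "classifier", 10 classes
images = rng.uniform(size=(8, 16, 16))     # toy scenes
patch = rng.uniform(size=(4, 4))           # patch pixels being optimized
target = 3                                 # attacker-chosen target class
w_target = W[target].reshape(16, 16)

def target_logit(img, patch, r, c):
    out = img.copy()
    out[r:r + 4, c:c + 4] = patch          # paste the patch into the scene
    return float(W[target] @ out.ravel())

for step in range(200):                    # gradient ascent on the patch
    grad = np.zeros_like(patch)
    for img in images:
        r, c = rng.integers(0, 13, size=2) # random placement ("transformation")
        # For a linear classifier, d(target logit)/d(patch) is simply the
        # classifier weights under the patch location.
        grad += w_target[r:r + 4, c:c + 4]
    patch = np.clip(patch + 0.01 * grad / len(images), 0.0, 1.0)

print("target logit with patch:", target_logit(images[0], patch, 4, 4))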
Author:
Abadi, Martín, Erlingsson, Úlfar, Goodfellow, Ian, McMahan, H. Brendan, Mironov, Ilya, Papernot, Nicolas, Talwar, Kunal, Zhang, Li
Published in:
IEEE 30th Computer Security Foundations Symposium (CSF), pages 1--6, 2017
The recent, remarkable growth of machine learning has led to intense interest in the privacy of the data on which machine learning relies, and to new techniques for preserving privacy. However, older ideas about privacy may well remain valid and useful …
External link:
http://arxiv.org/abs/1708.08022
Learning a natural language interface for database tables is a challenging task that involves deep language understanding and multi-step reasoning. The task is often approached by mapping natural language queries to logical forms or programs that provide …
External link:
http://arxiv.org/abs/1611.08945
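To make "logical forms or programs" concrete: a query such as "total population of the French cities" might map to a filter followed by an aggregation over the table. The table, the query, and the operation set below are invented; the paper's model learns such programs end to end rather than executing hand-written ones:

# Executing a small "program" (filter + aggregate) over a table, the kind of
# target representation natural-language queries are mapped to.

table = [
    {"city": "Paris", "country": "France", "population": 2_100_000},
    {"city": "Lyon", "country": "France", "population": 520_000},
    {"city": "Berlin", "country": "Germany", "population": 3_700_000},
]

program = [
    ("filter", lambda row: row["country"] == "France"),
    ("sum", "population"),
]

def execute(program, rows):
    for op, arg in program:
        if op == "filter":
            rows = [r for r in rows if arg(r)]
        elif op == "sum":
            return sum(r[arg] for r in rows)
    return rows

print(execute(program, table))  # 2620000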
Author:
Abadi, Martín, Andersen, David G.
We ask whether neural networks can learn to use secret keys to protect information from other neural networks. Specifically, we focus on ensuring confidentiality properties in a multiagent system, and we specify those properties in terms of an adversary …
External link:
http://arxiv.org/abs/1610.06918
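A sketch of the adversarial objective only, with the networks and bit encoding left abstract (the names, normalization, and exact weighting below are assumptions, not the paper's code): Alice and Bob are trained to make Bob's reconstruction error small while pushing the adversary Eve's error toward that of random guessing.

# Loss terms for adversarial communication: Bob should recover the plaintext,
# Eve should do no better than chance (0.5 error per bit for +/-1 bits).

import numpy as np

rng = np.random.default_rng(0)

def reconstruction_error(plaintext_bits, guess_bits):
    """Mean absolute error per bit, for bits encoded in {-1, +1}."""
    return np.mean(np.abs(plaintext_bits - guess_bits)) / 2.0

def alice_bob_loss(plaintext, bob_guess, eve_guess):
    bob_err = reconstruction_error(plaintext, bob_guess)
    eve_err = reconstruction_error(plaintext, eve_guess)
    # Penalize Eve being far from the random-guessing error of 0.5 per bit.
    return bob_err + (0.5 - eve_err) ** 2

plaintext = rng.choice([-1.0, 1.0], size=(4, 16))
print(alice_bob_loss(plaintext, plaintext * 0.9, -plaintext * 0.1))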
Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore …
External link:
http://arxiv.org/abs/1610.05755
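One standard mitigation in this line of work is to train an ensemble of "teacher" models on disjoint partitions of the private data and release only noisy aggregates of their votes to a "student" model. The sketch below illustrates that aggregation step with invented vote counts and an arbitrary privacy parameter; it is not taken from the paper:

# Noisy aggregation of teacher votes: add Laplace noise to the per-class
# vote counts and release only the noisy winner.

import numpy as np

rng = np.random.default_rng(0)
num_classes = 10
teacher_votes = rng.integers(0, num_classes, size=250)   # one vote per teacher

counts = np.bincount(teacher_votes, minlength=num_classes)
epsilon = 0.1                                            # per-query privacy knob
noisy_counts = counts + rng.laplace(scale=1.0 / epsilon, size=num_classes)
student_label = int(np.argmax(noisy_counts))

print("noisy aggregated label:", student_label)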
We study the interaction of the programming construct "new", which generates statically scoped names, with communication via messages on channels. This interaction is crucial in security protocols, which are the main motivating examples for our work, …
External link:
http://arxiv.org/abs/1609.03003
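As a very loose operational picture of that interaction (a toy model, not the pi calculus itself): "new" creates a fresh name that no other process can guess, and processes then exchange such names as messages on channels. Here channels are modelled as simple queues and freshness by a global counter; every identifier is invented for illustration.

# A fresh name is created with "new" and then output on a public channel,
# where a receiving process inputs it.

from collections import deque
from itertools import count

_fresh = count()

def new_name(hint="n"):
    """Generate a globally fresh name, standing in for the 'new' construct."""
    return f"{hint}{next(_fresh)}"

channels = {"c": deque()}           # a public channel named "c"

def sender():
    nonce = new_name("nonce")       # a fresh name, unguessable by construction
    channels["c"].append(nonce)     # output the fresh name on channel c
    return nonce

def receiver():
    return channels["c"].popleft()  # input a message from channel c

sent = sender()
received = receiver()
print(sent == received, sent)       # the receiver has learned the fresh name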
Author:
Abadi, Martín, Chu, Andy, Goodfellow, Ian, McMahan, H. Brendan, Mironov, Ilya, Talwar, Kunal, Zhang, Li
Published in:
Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (ACM CCS), pp. 308-318, 2016
Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. …
External link:
http://arxiv.org/abs/1607.00133
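The recipe this CCS 2016 paper is known for is differentially private SGD: clip each per-example gradient to a norm bound and add Gaussian noise before the update. The sketch below applies that recipe to a toy linear model; the data, hyperparameters, and loop structure are placeholders, not the paper's implementation.

# Differentially private SGD: per-example gradient clipping plus Gaussian
# noise on the summed gradient, here for least-squares linear regression.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=256)

w = np.zeros(5)
clip_norm, noise_mult, lr, batch = 1.0, 1.1, 0.1, 32

for step in range(200):
    idx = rng.choice(len(X), size=batch, replace=False)
    summed = np.zeros_like(w)
    for i in idx:
        g = 2 * (X[i] @ w - y[i]) * X[i]                 # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)  # clip to norm bound
        summed += g
    noise = rng.normal(scale=noise_mult * clip_norm, size=w.shape)
    w -= lr * (summed + noise) / batch                   # noisy average step

print("learned weights:", np.round(w, 2))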