Showing 1 - 10 of 51 for search: '"Borovykh, Anastasia"'
Personalized learning is a proposed approach to address the problem of data heterogeneity in collaborative machine learning. In a decentralized setting, the two main challenges of personalization are client clustering and data privacy. In this paper,
External link:
http://arxiv.org/abs/2405.17697
Adversaries have been targeting unique identifiers to launch typo-squatting, mobile app squatting and even voice squatting attacks. Anecdotal evidence suggests that online social networks (OSNs) are also plagued with accounts that use similar username
External link:
http://arxiv.org/abs/2401.09209
Author:
Gu, Boyang, Borovykh, Anastasia
We study whether inputs from the same class can be connected by a continuous path, in original or latent representation space, such that all points on the path are mapped by the neural network model to the same class. Understanding how the neural net
External link:
http://arxiv.org/abs/2311.06816
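The question in the snippet above, whether all points on a path between two same-class inputs keep the same predicted class, can be probed with a toy check. This is a minimal sketch under simplifying assumptions: a straight-line path, a hand-written classifier, and a finite sample of points on the path (the paper considers general continuous paths and latent representation spaces).

```python
import numpy as np

def path_stays_in_class(predict, a, b, n=50):
    # Sample n points on the straight line between inputs a and b
    # and check that every sampled point is assigned the same class
    # as the endpoint a.
    target = predict(a)
    ts = np.linspace(0.0, 1.0, n)
    return all(predict((1 - t) * a + t * b) == target for t in ts)

# Toy linear classifier: class is the sign of the first coordinate.
predict = lambda x: int(x[0] > 0)

a, b = np.array([1.0, 0.0]), np.array([2.0, 5.0])
print(path_stays_in_class(predict, a, b))  # True: half-spaces are convex
```

For a linear classifier the answer is always yes, because each class region is a convex half-space; for nonlinear decision boundaries the straight line can leave the class region, which is what makes the question nontrivial.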
We introduce an analytical framework to quantify the changes in a machine learning algorithm's output distribution following the inclusion of a few data points in its training set, a notion we define as leave-one-out distinguishability (LOOD). This i
External link:
http://arxiv.org/abs/2309.17310
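The LOOD notion above can be illustrated with a toy experiment: train the same model with and without one extra data point, across many random seeds, and compare the two resulting distributions of the model's prediction at a query input. This is a rough sketch, not the paper's analytical framework; the logistic-regression model, seed-based training randomness, and the mean-gap distance between distributions are all illustrative assumptions.

```python
import numpy as np

def train_logreg(X, y, seed, steps=200, lr=0.5):
    # Tiny logistic regression trained by gradient descent; the seed
    # controls weight initialisation, so repeated training induces a
    # distribution over trained models.
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def output_distribution(X, y, x_query, n_seeds=50):
    # Distribution of the model's prediction at x_query over
    # training randomness (here: initialisation seeds).
    preds = []
    for s in range(n_seeds):
        w = train_logreg(X, y, seed=s)
        preds.append(1.0 / (1.0 + np.exp(-np.clip(x_query @ w, -30, 30))))
    return np.array(preds)

# Toy data: two Gaussian clusters.
rng = np.random.default_rng(0)
X0 = rng.normal(-1.0, 0.5, size=(20, 2))
X1 = rng.normal(+1.0, 0.5, size=(20, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)

# The extra data point whose inclusion we want to distinguish.
z_x, z_y = np.array([[3.0, 3.0]]), np.array([1])
X_plus = np.vstack([X, z_x])
y_plus = np.concatenate([y, z_y])

x_query = np.array([0.2, -0.2])
p_without = output_distribution(X, y, x_query)
p_with = output_distribution(X_plus, y_plus, x_query)

# Crude distinguishability proxy: gap between the two prediction
# distributions' means (the paper defines a principled distributional
# measure; this mean gap is only an illustrative stand-in).
lood_proxy = abs(p_with.mean() - p_without.mean())
print(f"mean-gap proxy: {lood_proxy:.4f}")
```

The larger this gap, the easier it is to tell from the model's outputs whether the extra point was in the training set, which connects the notion to membership-inference-style privacy risks.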
The privacy preserving properties of Langevin dynamics with additive isotropic noise have been extensively studied. However, the isotropic noise assumption is very restrictive: (a) when adding noise to existing learning algorithms to preserve privacy
External link:
http://arxiv.org/abs/2302.00766
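The isotropic-noise setting the snippet starts from, adding the same-scale Gaussian noise in every coordinate of a gradient step, can be sketched on a toy objective. This is a minimal illustration of the noise-injection pattern, not the paper's method (which studies relaxing the isotropy assumption); the objective, step size, and noise scale are illustrative choices.

```python
import numpy as np

def langevin_step(w, grad, lr, sigma, rng):
    # One gradient step with additive isotropic Gaussian noise:
    # the Langevin-style update used in private optimisation.
    return w - lr * grad(w) + sigma * rng.normal(size=w.shape)

# Toy quadratic objective f(w) = ||w - c||^2 / 2, with gradient w - c.
c = np.array([1.0, 2.0])
grad = lambda w: w - c

rng = np.random.default_rng(0)
w = np.zeros(2)
for _ in range(500):
    w = langevin_step(w, grad, lr=0.1, sigma=0.05, rng=rng)
print(w)  # fluctuates around the minimiser c = [1, 2]
```

Because the noise is isotropic, it perturbs every direction equally regardless of the objective's curvature; the snippet's point is that this one-size-fits-all choice is restrictive.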
We present a deep learning approach for the approximation of the Hamilton-Jacobi-Bellman partial differential equation (HJB PDE) associated with the Nonlinear Quadratic Regulator (NLQR) problem. A state-dependent Riccati equation control law is first used to gene
External link:
http://arxiv.org/abs/2207.09299
The mirror descent algorithm is known to be effective in situations where it is beneficial to adapt the mirror map to the underlying geometry of the optimization model. However, the effect of mirror maps on the geometry of distributed optimization pr
External link:
http://arxiv.org/abs/2201.08642
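Mirror descent, as mentioned in the snippet above, replaces the Euclidean gradient step with one adapted to the problem's geometry via a mirror map. A standard single-machine example (not the distributed setting the paper studies) is the negative-entropy mirror map on the probability simplex, whose update is the exponentiated-gradient rule; the toy linear objective below is an illustrative choice.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, lr=0.1, steps=200):
    # Mirror descent with the negative-entropy mirror map: on the
    # probability simplex the update becomes multiplicative,
    # x_{t+1} proportional to x_t * exp(-lr * grad(x_t)),
    # followed by renormalisation.
    x = x0.copy()
    for _ in range(steps):
        x = x * np.exp(-lr * grad(x))
        x /= x.sum()
    return x

# Toy objective on the simplex: <c, x>, minimised at the vertex
# with the smallest cost c_i.
c = np.array([3.0, 1.0, 2.0])
x = mirror_descent_simplex(lambda x: c, x0=np.ones(3) / 3)
print(x)  # mass concentrates on index 1, the smallest cost
```

The multiplicative update keeps iterates on the simplex automatically, which is exactly the kind of geometric fit between mirror map and constraint set that the snippet refers to.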
Author:
Mo, Fan, Borovykh, Anastasia, Malekzadeh, Mohammad, Demetriou, Soteris, Gündüz, Deniz, Haddadi, Hamed
In collaborative learning, clients keep their data private and communicate only the computed gradients of the deep neural network being trained on their local data. Several recent attacks show that one can still extract private information from the s
External link:
http://arxiv.org/abs/2105.13929
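The collaborative-learning setup the snippet describes, where clients share only gradients computed on local data, can be sketched as a simple federated SGD loop. This is a minimal illustration of the communication pattern, not the paper's attack or defence; the linear model, squared loss, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def local_gradient(w, X, y):
    # Client side: gradient of the mean squared error on local data.
    # The raw (X, y) never leaves the client; only this gradient does.
    return 2 * X.T @ (X @ w - y) / len(y)

def federated_sgd(clients, dim, rounds=100, lr=0.1):
    # Server side: each round, collect one gradient per client,
    # average them, and take a global step (the basic FedSGD pattern).
    w = np.zeros(dim)
    for _ in range(rounds):
        grads = [local_gradient(w, X, y) for X, y in clients]
        w -= lr * np.mean(grads, axis=0)
    return w

# Three clients with private linear-regression data.
rng = np.random.default_rng(1)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(30, 2))
    y = X @ true_w + 0.01 * rng.normal(size=30)
    clients.append((X, y))

w = federated_sgd(clients, dim=2)
print(w)  # close to the shared true_w = [1, -2]
```

The attacks the snippet alludes to work precisely because these shared gradients are functions of the private data, so they can leak information about it even though the data itself is never transmitted.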
It is known that deep neural networks, trained for the classification of non-sensitive target attributes, can reveal sensitive attributes of their input data through internal representations extracted by the classifier. We take a step forward and sho
External link:
http://arxiv.org/abs/2105.12049
Training deep neural networks via federated learning allows clients to share, instead of the original data, only the model trained on their data. Prior work has demonstrated that in practice a client's private information, unrelated to the main learn
External link:
http://arxiv.org/abs/2010.08762