Showing 1 - 10 of 1,092 for search: '"Seeland, A."'
Gradient inversion attacks are a ubiquitous threat in federated learning as they exploit gradient leakage to reconstruct supposedly private training data. Recent work has proposed to prevent gradient leakage without loss of model utility by incorpor…
External link:
http://arxiv.org/abs/2309.04515
Author:
Koch, Timo, Gläser, Dennis, Seeland, Anett, Roy, Sarbani, Schulze, Katharina, Weishaupt, Kilian, Boehringer, David, Hermann, Sibylle, Flemisch, Bernd
Research software is an integral part of most research today and it is widely accepted that research software artifacts should be accessible and reproducible. However, the sustainable archival of research software artifacts is an ongoing effort. We i…
External link:
http://arxiv.org/abs/2301.12830
Published in:
Applied Artificial Intelligence, Vol 38, Iss 1 (2024)
Federated Learning (FL) allows multiple clients to train a common model without sharing their private training data. In practice, federated optimization struggles with sub-optimal model utility because data is not independent and identically distribu…
External link:
https://doaj.org/article/09c8d0aa6936474bb77dbd525abbaac5
Gradient inversion attacks on federated learning systems reconstruct client training data from exchanged gradient information. To defend against such attacks, a variety of defense mechanisms have been proposed. However, they usually lead to an unacceptabl…
External link:
http://arxiv.org/abs/2208.06163
Published in:
Journal of Neural Engineering 14 2 (2017) 025003
Objective: Classifier transfers usually come with dataset shifts. To overcome them, online strategies have to be applied. For practical applications, limitations in the computational resources for the adaptation of batch learning algorithms, like the…
External link:
http://arxiv.org/abs/2208.05112
Gradient Inversion (GI) attacks are a ubiquitous threat in Federated Learning (FL) as they exploit gradient leakage to reconstruct supposedly private training data. Common defense mechanisms such as Differential Privacy (DP) or stochastic Privacy Mod…
External link:
http://arxiv.org/abs/2208.04767
Collaborative training of neural networks leverages distributed data by exchanging gradient information between different clients. Although training data entirely resides with the clients, recent work shows that training data can be reconstructed fro…
External link:
http://arxiv.org/abs/2108.04725
Author:
Seeland, Gianna R., Williams, Brinley M., Yadav, Menaka, Bowden, Emily, Antoniewicz, Leah W., Kilpatrick, Charlie C., Mastrobattista, Joan M., Ratan, Bani M.
Published in:
In Journal of Surgical Education March 2024 81(3):397-403
Author:
Sophia Sgraja, Judith Mollenhauer, Martina Kloepfer, Ute Seeland, Clarissa Kurscheid, Volker Amelung
Published in:
PLoS ONE, Vol 19, Iss 4, p e0301732 (2024)
Background: A growing body of evidence has demonstrated that a gender-sensitive approach to healthcare is needed in all areas of medicine. Although medical and nursing guidelines include gender-sensitive care (GSC+) recommendations, the level of implem…
External link:
https://doaj.org/article/5f4374b15e1d460a925eda42fccb8333