Showing 1 - 10 of 543 for the search: '"Scheliga A"'
Gradient inversion attacks are a ubiquitous threat in federated learning as they exploit gradient leakage to reconstruct supposedly private training data. Recent work has proposed to prevent gradient leakage without loss of model utility by incorpor…
External link:
http://arxiv.org/abs/2309.04515
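The entry above only describes gradient inversion at a high level, so here is a minimal, hedged sketch of the generic attack idea (iterative gradient matching), assuming a toy PyTorch model and a known label; it is not the method of the linked preprint.

```python
import torch
import torch.nn as nn

# Assumed toy model and data; not the architecture from the cited preprint.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()
params = list(model.parameters())

# Gradients the attacker observes (simulated here from one "private" sample).
x_private = torch.randn(1, 1, 28, 28)
y_private = torch.tensor([3])
leaked_grads = torch.autograd.grad(criterion(model(x_private), y_private), params)

# Reconstruction: start from noise and optimize the dummy input so that its
# gradients match the leaked ones (the label is assumed known for simplicity).
x_dummy = torch.randn(1, 1, 28, 28, requires_grad=True)
optimizer = torch.optim.Adam([x_dummy], lr=0.1)

for _ in range(200):
    optimizer.zero_grad()
    dummy_grads = torch.autograd.grad(
        criterion(model(x_dummy), y_private), params, create_graph=True
    )
    # Squared L2 distance between dummy and leaked gradients drives the attack.
    grad_diff = sum(((dg - lg) ** 2).sum() for dg, lg in zip(dummy_grads, leaked_grads))
    grad_diff.backward()
    optimizer.step()
```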
Author:
Sebastian Scheliga, Mara Derissen, Knut Kröger, Rainer Röhrig, Lea Schomacher, Hannah Schick, Rainer Beckers, Hinrich Böhner, Ute Habel
Published in:
BMC Public Health, Vol 24, Iss 1, Pp 1-9 (2024)
Abstract Background Smoking is a major risk factor for cardiovascular diseases, notably peripheral arterial disease (PAD). Despite this link, research on smoking cessation interventions in PAD patients remains scarce and inconclusive regarding the eff…
External link:
https://doaj.org/article/63b4eeaf8a314b398bbddf59e0ccfb06
Author:
Eva Scheliga
Published in:
Campos, Vol 24, Iss 1-2, Pp 288-291 (2024)
Critical reviews are fundamental pieces of academic dissemination: this material situates us within the shifting debates of our field of knowledge, prompting dialogue with our peers and a broadening of our frames of reference. In this…
External link:
https://doaj.org/article/3e636ce587074109b91235fb5d66ac78
Author:
V Pfister, FM Marques, R Santucci, V Buccheri, G Ribeiro, VLP Figueiredo, N Hamerschlak, A Costa, T Silveira, A Scheliga, L Perobelli, CS Chiattone, C Arrais-Rodrigues
Published in:
Hematology, Transfusion and Cell Therapy, Vol 46, Pp S327-S328 (2024)
Introduction: Ibrutinib, a Bruton's tyrosine kinase (BTK) inhibitor, is associated with increased survival and prolonged response, even in high-risk scenarios, with relatively low toxicity compared to chemoimmunotherapy. However, a significant number…
External link:
https://doaj.org/article/ed19c137f5f24762bd10e68c250f08cc
Gradient inversion attacks on federated learning systems reconstruct client training data from exchanged gradient information. To defend against such attacks, a variety of defense mechanisms have been proposed. However, they usually lead to an unacceptabl…
External link:
http://arxiv.org/abs/2208.06163
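As a rough illustration of the utility/privacy trade-off mentioned in the entry above, the following sketch shows a common baseline defense: clipping the shared gradients and adding Gaussian noise. The helper name and constants are assumptions for illustration, not details from the linked preprint.

```python
import torch

def perturb_gradients(grads, clip_norm=1.0, noise_std=0.01):
    """Clip the overall gradient norm, then add Gaussian noise to each tensor."""
    total_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
    return [g * scale + noise_std * torch.randn_like(g) for g in grads]

# Example: perturb the gradients of a toy model before they would be shared.
model = torch.nn.Linear(4, 2)
loss = model(torch.randn(8, 4)).pow(2).mean()
grads = torch.autograd.grad(loss, list(model.parameters()))
shared_grads = perturb_gradients(grads)
```

Stronger perturbation makes reconstruction harder but also degrades the aggregated model, which is the utility loss the snippet refers to.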
Exploiting gradient leakage to reconstruct supposedly private training data, gradient inversion attacks are a ubiquitous threat in collaborative learning of neural networks. To prevent gradient leakage without suffering from severe loss in model per…
External link:
http://arxiv.org/abs/2208.04767
Published in:
Applied Artificial Intelligence, Vol 38, Iss 1 (2024)
Federated Learning (FL) allows multiple clients to train a common model without sharing their private training data. In practice, federated optimization struggles with sub-optimal model utility because data is not independent and identically distribu…
External link:
https://doaj.org/article/09c8d0aa6936474bb77dbd525abbaac5
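To make the client/server loop of federated optimization referred to above concrete, here is a minimal FedAvg-style sketch with an assumed toy model and synthetic client data; it does not reproduce the approach of the cited article.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_update(global_model, data, targets, lr=0.05, epochs=1):
    """Each client trains a copy of the global model on its own data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(data), targets).backward()
        opt.step()
    return model.state_dict()

def fed_avg(states):
    """The server averages the client weights parameter by parameter."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = nn.Linear(10, 2)
clients = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(3)]
for _ in range(5):  # communication rounds
    states = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(fed_avg(states))
```

With non-IID client splits, this averaged model drifts from the centralized optimum, which is the utility gap the entry describes.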
Author:
Scheliga, Sebastian, Dohrn, Maike F., Habel, Ute, Lampert, Angelika, Rolke, Roman, Lischka, Annette, van den Braak, Noortje, Spehr, Marc, Jo, Han-Gue, Kellermann, Thilo
Published in:
The Journal of Pain, Vol 25, Iss 6 (June 2024)
Collaborative training of neural networks leverages distributed data by exchanging gradient information between different clients. Although training data entirely resides with the clients, recent work shows that training data can be reconstructed fro…
External link:
http://arxiv.org/abs/2108.04725
Academic article
This result cannot be displayed for unauthenticated users; please sign in to view it.