Showing 1 - 10 of 84 for search: '"Östman, Johan"'
This study investigates the risks of exposing confidential chemical structures when machine learning models trained on these structures are made publicly available. We use membership inference attacks, a common method to assess privacy that is largely…
External link:
http://arxiv.org/abs/2410.16975
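The membership inference setting named above can be illustrated with a minimal loss-threshold attack; this is a generic, hypothetical sketch (function names and data invented here), not the paper's actual method: an attacker guesses that examples on which the model's loss is low were part of the training set.

```python
# Minimal loss-threshold membership inference attack (illustrative sketch).
# Assumes the attacker can observe per-example losses; all names are hypothetical.

def attack_accuracy(member_losses, nonmember_losses, threshold):
    """Fraction of correct member/non-member guesses at a given loss threshold."""
    hits = sum(l < threshold for l in member_losses)        # members flagged as "in"
    hits += sum(l >= threshold for l in nonmember_losses)   # non-members flagged as "out"
    return hits / (len(member_losses) + len(nonmember_losses))

# Toy data: training members tend to have lower loss than held-out examples.
members = [0.10, 0.20, 0.15, 0.30]
nonmembers = [0.90, 1.20, 0.80, 0.40]
print(attack_accuracy(members, nonmembers, threshold=0.35))  # 1.0 on this toy data
```

In practice the attack's success is measured across many thresholds (e.g. as an ROC curve), not at a single hand-picked one.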
Author:
Björkdahl, Liv, Pauli, Oskar, Östman, Johan, Ceccobello, Chiara, Lundell, Sara, Kjellberg, Magnus
Data in the healthcare domain arise from a variety of sources and modalities, such as x-ray images, continuous measurements, and clinical notes. Medical practitioners integrate these diverse data types daily to make informed and accurate decisions. W…
External link:
http://arxiv.org/abs/2408.06943
Local mutual-information privacy (LMIP) is a privacy notion that aims to quantify the reduction of uncertainty about the input data when the output of a privacy-preserving mechanism is revealed. We study the relation of LMIP with local differential privacy…
External link:
http://arxiv.org/abs/2405.07596
Author:
Garg, Sonakshi, Jönsson, Hugo, Kalander, Gustav, Nilsson, Axel, Pirange, Bhhaanu, Valadi, Viktor, Östman, Johan
Federated Learning (FL) is a decentralized learning paradigm, enabling parties to collaboratively train models while keeping their data confidential. Within autonomous driving, it offers the potential of reducing data storage costs, reducing bandwidth…
External link:
http://arxiv.org/abs/2405.01073
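The collaborative training the abstract refers to is typically realized with federated averaging (FedAvg); a minimal sketch under invented toy data (not this paper's setup) averages the clients' model parameters weighted by local dataset size:

```python
# Minimal FedAvg aggregation step: each client trains locally, then the
# server averages the resulting weight vectors, weighted by dataset size.

def fed_avg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (plain lists of floats)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients with different amounts of local data.
w_a, n_a = [1.0, 2.0], 100   # client A's weights and dataset size
w_b, n_b = [3.0, 4.0], 300   # client B's weights and dataset size
print(fed_avg([w_a, w_b], [n_a, n_b]))  # [2.5, 3.5]
```

Only these parameter vectors leave the clients; the raw data never does, which is what keeps it confidential.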
Secure aggregation (SecAgg) is a commonly used privacy-enhancing mechanism in federated learning, affording the server access only to the aggregate of model updates while safeguarding the confidentiality of individual updates. Despite widespread claims…
External link:
http://arxiv.org/abs/2403.17775
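A common way to realize the SecAgg property mentioned above is pairwise additive masking: each pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the aggregate. The following is a toy sketch with scalar integer updates and invented values, not the full protocol (which also handles dropouts and key agreement):

```python
import random

# Toy pairwise-masking secure aggregation. For each ordered client pair
# (i, j), a shared random mask is added by client i and subtracted by
# client j modulo Q, so individual updates are hidden but the sum is exact.

Q = 2**31 - 1  # arithmetic modulus (a Mersenne prime)

def masked_updates(updates, seed=0):
    rng = random.Random(seed)  # stands in for pairwise-agreed mask seeds
    masked = list(updates)
    n = len(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(Q)
            masked[i] = (masked[i] + m) % Q
            masked[j] = (masked[j] - m) % Q
    return masked

updates = [5, 17, 42]  # each client's (scalar) model update
masked = masked_updates(updates)
print(sum(masked) % Q == sum(updates) % Q)  # True: the masks cancel in the sum
```

The server sees only the masked values, yet recovers the exact aggregate; this is the "access only to the aggregate" guarantee the abstract describes.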
We address the challenge of federated learning on graph-structured data distributed across multiple clients. Specifically, we focus on the prevalent scenario of interconnected subgraphs, where interconnections between different clients play a critical…
External link:
http://arxiv.org/abs/2402.19163
We propose FedGT, a novel framework for identifying malicious clients in federated learning with secure aggregation. Inspired by group testing, the framework leverages overlapping groups of clients to identify the presence of malicious clients in the…
External link:
http://arxiv.org/abs/2305.05506
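The group-testing idea behind FedGT can be illustrated with a naive non-adaptive decoder; this is a hypothetical sketch of classical group testing, not FedGT's actual decoding rule. A group "tests positive" if it contains at least one malicious client, so any client appearing in a negative group is provably benign:

```python
# Naive group-testing decoder for locating malicious clients.
# Overlapping groups of clients are tested jointly; a group is positive
# iff it contains a malicious client, so members of negative groups are clean.

def decode(groups, results, num_clients):
    """groups: list of client-index sets; results: parallel list of bools."""
    suspects = set(range(num_clients))
    for group, positive in zip(groups, results):
        if not positive:
            suspects -= set(group)  # everyone in a clean group is benign
    return suspects  # remaining candidates for malicious clients

# 5 clients, client 3 is malicious; four overlapping test groups.
groups = [{0, 1, 2}, {2, 3, 4}, {0, 3}, {1, 4}]
results = [g & {3} != set() for g in groups]  # positive iff group contains 3
print(decode(groups, results, 5))  # {3}
```

With well-chosen overlapping groups, far fewer tests than clients suffice to isolate a small number of malicious participants.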
Onboard machine learning on the latest satellite hardware offers the potential for significant savings in communication and operational costs. We showcase the training of a machine learning model on a satellite constellation for scene classification…
External link:
http://arxiv.org/abs/2305.04059
The next generation of spacecraft is anticipated to enable various new applications involving onboard processing, machine learning, and decentralised operational scenarios. Even though many of these have been previously proposed and evaluated, the ope…
External link:
http://arxiv.org/abs/2302.02659
Personalized decentralized learning is a promising paradigm for distributed learning, enabling each node to train a local model on its own data and collaborate with other nodes to improve without sharing any data. However, this approach poses significant…
External link:
http://arxiv.org/abs/2301.12755