Showing 1 - 10 of 10
for search: '"Chundawat, Vikram S"'
Author:
Chundawat, Vikram S, Niroula, Pushkar, Dhungana, Prasanna, Schoepf, Stefan, Mandal, Murari, Brintrup, Alexandra
Federated learning (FL) has enabled collaborative model training across decentralized data sources or clients. While adding new participants to a shared model does not pose great technical hurdles, the removal of a participant and their related infor…
External link:
http://arxiv.org/abs/2410.04144
Author:
Tarun, Ayush K, Chundawat, Vikram S, Mandal, Murari, Tan, Hong Ming, Chen, Bowei, Kankanhalli, Mohan
Quantifying the value of data within a machine learning workflow can play a pivotal role in making more strategic decisions in machine learning initiatives. The existing Shapley value based frameworks for data valuation in machine learning are comput…
External link:
http://arxiv.org/abs/2402.09288
With the introduction of data protection and privacy regulations, it has become crucial to remove the lineage of data on demand from a machine learning (ML) model. In the last few years, there have been notable developments in machine unlearning to r…
External link:
http://arxiv.org/abs/2210.08196
Synthetic tabular data generation becomes crucial when real data is limited, expensive to collect, or simply cannot be used due to privacy concerns. However, producing good quality synthetic data is challenging. Several probabilistic, statistical, ge…
External link:
http://arxiv.org/abs/2207.05295
Machine unlearning has become an important area of research due to an increasing need for machine learning (ML) applications to comply with the emerging data privacy regulations. It facilitates the provision for removal of certain set or class of dat…
External link:
http://arxiv.org/abs/2205.08096
Modern privacy regulations grant citizens the right to be forgotten by products, services and companies. In case of machine learning (ML) applications, this necessitates deletion of data not only from storage archives but also from ML models. Due to…
External link:
http://arxiv.org/abs/2201.05629
Unlearning the data observed during the training of a machine learning (ML) model is an important task that can play a pivotal role in fortifying the privacy and security of ML-based applications. This paper raises the following questions: (i) can we…
External link:
http://arxiv.org/abs/2111.08947
Published in:
IEEE Transactions on Neural Networks and Learning Systems; September 2024, Vol. 35 Issue: 9 p13046-13055, 10p
Academic article
This result cannot be displayed to unauthenticated users. Sign in to view this result.
Academic article
This result cannot be displayed to unauthenticated users. Sign in to view this result.