Showing 1 - 10 of 7,773 for search: '"Bellet A"'
We study Federated Causal Inference, an approach to estimating treatment effects from decentralized data across centers. We compare three classes of Average Treatment Effect (ATE) estimators derived from the Plug-in G-Formula, ranging from simple meta-…
External link:
http://arxiv.org/abs/2410.16870
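For context, the single-site plug-in G-formula the snippet refers to fits an outcome model and averages its predictions under both treatment arms; the simplest federated variant then combines per-center estimates in meta-analysis style. A minimal sketch, with all names illustrative rather than taken from the paper:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def plugin_g_formula_ate(X, a, y):
    """Single-site plug-in G-formula: fit mu(a, x) = E[Y | A=a, X=x],
    then average mu(1, x_i) - mu(0, x_i) over the observed covariates."""
    model = LinearRegression().fit(np.column_stack([a, X]), y)
    mu1 = model.predict(np.column_stack([np.ones_like(a), X]))
    mu0 = model.predict(np.column_stack([np.zeros_like(a), X]))
    return float(np.mean(mu1 - mu0))

def meta_analysis_ate(center_ates, center_sizes):
    # One simple aggregation strategy: weight per-center ATEs by sample
    # size. The paper compares several classes of such estimators.
    w = np.asarray(center_sizes) / np.sum(center_sizes)
    return float(np.sum(w * np.asarray(center_ates)))
```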
We present Nebula, a system for differentially private histogram estimation of data distributed among clients. Nebula enables clients to locally subsample and encode their data such that an untrusted server learns only data values that meet an aggregation…
External link:
http://arxiv.org/abs/2409.09676
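The description (local subsampling and encoding, server learns only values that clear an aggregation threshold) suggests a threshold-aggregation histogram. A toy sketch of that general idea, assuming a report probability p and a threshold; this is not Nebula's actual protocol, which additionally encodes reports so the server cannot read values below the threshold:

```python
import random
from collections import Counter

def client_report(value, p=0.8):
    # Local subsampling: each client reports with probability p, which
    # contributes privacy amplification in the full protocol.
    return value if random.random() < p else None

def server_histogram(reports, threshold=10):
    # Only values whose count clears the threshold are released; in a
    # real DP system the counts would also be noised.
    counts = Counter(r for r in reports if r is not None)
    return {v: c for v, c in counts.items() if c >= threshold}
```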
We formulate well-posed continuous-time generative flows for learning distributions that are supported on low-dimensional manifolds through Wasserstein proximal regularizations of $f$-divergences. Wasserstein-1 proximal operators regularize $f$-divergences…
External link:
http://arxiv.org/abs/2407.11901
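For context, the Wasserstein-1 proximal of a divergence is usually built by infimal convolution; assuming that standard construction (the paper's exact formulation may differ),

$$ D_f^{\lambda}(\mu \,\|\, \nu) \;=\; \inf_{\mu'} \Big\{ D_f(\mu' \,\|\, \nu) + \tfrac{1}{\lambda}\, W_1(\mu', \mu) \Big\}, $$

which remains finite even when $\mu$ is supported on a low-dimensional manifold, since the $W_1$ term lets the intermediate measure $\mu'$ be absolutely continuous.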
We present a novel method for training score-based generative models which uses nonlinear noising dynamics to improve learning of structured distributions. Generalizing to a nonlinear drift allows for additional structure to be incorporated into the…
External link:
http://arxiv.org/abs/2405.15625
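Concretely, standard score-based models use a linear-drift forward SDE, e.g. the variance-preserving choice $\mathrm{d}X_t = -\tfrac{1}{2}\beta(t) X_t\,\mathrm{d}t + \sqrt{\beta(t)}\,\mathrm{d}W_t$; the generalization described here replaces the linear term with a nonlinear, state-dependent drift, schematically $\mathrm{d}X_t = b(X_t, t)\,\mathrm{d}t + \sqrt{\beta(t)}\,\mathrm{d}W_t$, where the choice of $b$ (notation ours, not the paper's) is what encodes the extra structure.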
In this paper, we introduce a data augmentation approach specifically tailored to enhance intersectional fairness in classification tasks. Our method capitalizes on the hierarchical structure inherent to intersectionality, by viewing groups as intersections…
External link:
http://arxiv.org/abs/2405.14521
Machine learning models can be trained with formal privacy guarantees via differentially private optimizers such as DP-SGD. In this work, we focus on a threat model where the adversary has access only to the final model, with no visibility into intermediate…
External link:
http://arxiv.org/abs/2405.14457
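For reference, the DP-SGD mechanism whose final model the threat model targets is per-sample gradient clipping plus Gaussian noise. A textbook sketch of one step (the generic recipe, not this paper's contribution):

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, lr=0.1, clip=1.0, sigma=1.0):
    """One DP-SGD update: clip each example's gradient to norm `clip`,
    average, then add Gaussian noise calibrated to the clipping bound."""
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    noise = np.random.normal(0.0, sigma * clip / len(clipped),
                             size=params.shape)
    return params - lr * (np.mean(clipped, axis=0) + noise)
```

Releasing only the final `params`, rather than every iterate, is exactly the gap between the standard DP-SGD analysis and the threat model studied here.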
In this paper, we propose Wasserstein proximals of $\alpha$-divergences as suitable objective functionals for learning heavy-tailed distributions in a stable manner. First, we provide sufficient, and in some cases necessary, relations among data dimension…
External link:
http://arxiv.org/abs/2405.13962
We study conformal prediction in the one-shot federated learning setting. The main goal is to compute marginally and training-conditionally valid prediction sets, at the server level, in only one round of communication between the agents and the server.
External link:
http://arxiv.org/abs/2405.12567
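One natural one-round scheme fitting this description has each agent send a single quantile of its local conformity scores, which the server aggregates into a quantile of quantiles. A hedged sketch with arbitrary quantile levels; the levels that make the resulting sets provably valid are the subject of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy calibration scores for 5 agents (stand-ins for real conformity scores).
agent_scores = [rng.random(100) for _ in range(5)]

def agent_message(scores, level):
    # Each agent communicates one local quantile, nothing else.
    return np.quantile(scores, level)

def server_threshold(messages, level):
    # Quantile of quantiles; the prediction set for input x is then
    # {y : score(x, y) <= tau}.
    return np.quantile(messages, level)

tau = server_threshold([agent_message(s, 0.95) for s in agent_scores], 0.9)
```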
Decentralized Gradient Descent (D-GD) allows a set of users to perform collaborative learning without sharing their data by iteratively averaging local model updates with their neighbors in a network graph. The absence of direct communication between…
External link:
http://arxiv.org/abs/2402.10001
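The iteration described here alternates a local gradient step with gossip averaging over the graph. A minimal sketch of standard D-GD with a doubly stochastic mixing matrix W (illustrative background, not the paper's privacy analysis):

```python
import numpy as np

def dgd_round(X, W, grads, step=0.1):
    """One round of Decentralized Gradient Descent.

    X:     (n_users, dim) current local models
    W:     (n_users, n_users) doubly stochastic gossip matrix; W[i, j] > 0
           only if users i and j are neighbors in the network graph
    grads: (n_users, dim) local gradients evaluated at the rows of X
    """
    return W @ X - step * grads
```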
The popularity of federated learning comes from the possibility of better scalability and the ability for participants to keep control of their data, improving data security and sovereignty. Unfortunately, sharing model updates also creates a new privacy…
External link:
http://arxiv.org/abs/2402.07471