Distributionally Robust Data Join

Authors: Awasthi, Pranjal; Jung, Christopher; Morgenstern, Jamie
Year of publication: 2022
DOI: 10.48550/arxiv.2202.05797
Description: Suppose we are given two datasets: a labeled dataset and an unlabeled dataset which also has additional auxiliary features not present in the first dataset. What is the most principled way to use these datasets together to construct a predictor? The answer should depend upon whether these datasets are generated by the same or different distributions over their mutual feature sets, and on how similar the test distribution will be to either of those distributions. In many applications, the two datasets will likely follow different distributions, but both may be close to the test distribution. We introduce the problem of building a predictor which minimizes the maximum loss over all probability distributions over the original features, auxiliary features, and binary labels whose Wasserstein distance is at most r₁ from the empirical distribution over the labeled dataset and at most r₂ from that of the unlabeled dataset. This can be thought of as a generalization of distributionally robust optimization (DRO), which allows for two data sources, one of which is unlabeled and may contain auxiliary features.
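The min-max objective described in the abstract can be sketched as follows. This is an illustrative formulation, not taken verbatim from the paper: h ranges over a hypothesis class H, ℓ is a loss function, P̂₁ and P̂₂ denote the empirical distributions of the labeled and unlabeled datasets, and W denotes Wasserstein distance (for the labeled dataset, which lacks the auxiliary features, the distance would be measured with respect to the shared coordinates).

```latex
\min_{h \in \mathcal{H}} \;
\max_{\substack{P :\; W(P, \hat{P}_1) \le r_1 \\ \phantom{P :\;} W(P, \hat{P}_2) \le r_2}}
\; \mathbb{E}_{(x, a, y) \sim P} \left[ \ell(h; x, a, y) \right]
```

Here x denotes the original features, a the auxiliary features, and y the binary label; setting r₂ = ∞ (ignoring the unlabeled dataset) recovers standard Wasserstein DRO over the labeled dataset alone.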
LIPIcs, Vol. 256, 4th Symposium on Foundations of Responsible Computing (FORC 2023), pages 10:1-10:15
Database: OpenAIRE