Showing 1 - 10 of 40 for search: '"Shen, Zheyan"'
To ensure out-of-distribution (OOD) generalization performance, traditional domain generalization (DG) methods resort to training on data from multiple sources with different underlying distributions, and the success of those DG methods largely depends on …
External link:
http://arxiv.org/abs/2305.15644
The problem of covariate-shift generalization has attracted intensive research attention. Previous stable learning algorithms employ sample reweighting schemes to decorrelate the covariates when there is no explicit domain information about training data. … (see the sketch after this entry)
External link:
http://arxiv.org/abs/2212.00992
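The "sample reweighting schemes to decorrelate the covariates" mentioned in the entry above are the core idea of stable learning. The snippet does not include the paper's algorithm, so the following is only a minimal illustrative sketch of the decorrelation idea: learn per-sample weights that shrink the weighted off-diagonal covariances between features. The function name, optimizer, and hyperparameters are our choices, not the authors'.

```python
import torch

def decorrelation_weights(X, steps=500, lr=0.05):
    """Learn per-sample weights that reduce pairwise covariate
    correlations -- an illustrative sketch of the stable-learning
    reweighting idea, not the paper's algorithm."""
    n, _ = X.shape
    logits = torch.zeros(n, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        w = torch.softmax(logits, dim=0) * n        # weights sum to n
        mu = (w[:, None] * X).mean(dim=0)           # weighted feature means
        Xc = X - mu
        cov = (w[:, None] * Xc).T @ Xc / n          # weighted covariance
        off_diag = cov - torch.diag(torch.diag(cov))
        loss = (off_diag ** 2).sum()                # penalize correlations
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (torch.softmax(logits, dim=0) * n).detach()

# Usage: inject a spurious correlation, learn weights, then fit any
# weighted estimator (e.g., weighted least squares) on (X, y, w).
X = torch.randn(200, 5)
X[:, 1] = 0.8 * X[:, 0] + 0.2 * torch.randn(200)
w = decorrelation_weights(X)
```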
Despite the remarkable performance that modern deep neural networks have achieved on independent and identically distributed (i.i.d.) data, they can crash under distribution shifts. Most current evaluation methods for domain generalization (DG) adopt … (see the sketch after this entry)
External link:
http://arxiv.org/abs/2204.08040
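The evaluation protocol the snippet breaks off at is, in most DG work, some variant of leave-one-domain-out: train on all domains but one and test on the held-out one. Whether this paper adopts or criticizes that protocol is not recoverable from the snippet; the sketch below only illustrates the generic protocol, with all names (`fit`, `evaluate`) ours.

```python
def leave_one_domain_out(domains, fit, evaluate):
    """Generic leave-one-domain-out DG evaluation.
    domains: dict mapping domain name -> (X, y)
    fit: callable taking a list of (X, y) source sets, returns a model
    evaluate: callable scoring a model on one held-out (X, y) set"""
    scores = {}
    for held_out in domains:
        sources = [d for name, d in domains.items() if name != held_out]
        model = fit(sources)                  # train on pooled source domains
        scores[held_out] = evaluate(model, domains[held_out])
    return scores
```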
Personalized pricing is a business strategy that charges individual consumers different prices based on their characteristics and behaviors. It has become common practice in many industries due to the availability of a growing amount of …
External link:
http://arxiv.org/abs/2202.04245
Covariate-shift generalization, a typical case of out-of-distribution (OOD) generalization, requires good performance on an unknown test distribution that differs from the accessible training distribution in the form of covariate shift. Recently, … (see the sketch after this entry)
External link:
http://arxiv.org/abs/2111.02355
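Covariate shift, as used above, means the marginal distribution of the covariates changes between training and test while the labeling function P(Y | X) stays fixed. A classical, method-agnostic remedy (not necessarily what this paper proposes) is importance weighting by the density ratio p_test(x) / p_train(x); a minimal sketch estimating that ratio with a probabilistic domain classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio_weights(X_train, X_test):
    """Estimate w(x) = p_test(x) / p_train(x) via a classifier that
    separates test points (label 1) from training points (label 0)."""
    X = np.vstack([X_train, X_test])
    z = np.r_[np.zeros(len(X_train)), np.ones(len(X_test))]
    clf = LogisticRegression(max_iter=1000).fit(X, z)
    p = clf.predict_proba(X_train)[:, 1]          # P(test | x)
    # Bayes' rule: p_test(x)/p_train(x) = p/(1-p) * n_train/n_test
    return p / (1 - p) * (len(X_train) / len(X_test))
```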
The ability to generalize under distributional shifts is essential to reliable machine learning, while models optimized with empirical risk minimization usually fail on non-i.i.d. testing data. Recently, invariant learning methods for out-of-distribution … (see the sketch after this entry)
External link:
http://arxiv.org/abs/2110.12425
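The "invariant learning methods" referenced above enforce a predictor to be simultaneously optimal across training environments. The snippet does not say which formulation the paper builds on; as one widely used instance, here is a sketch of the IRMv1 penalty (Arjovsky et al., 2019): the squared gradient of each environment's risk with respect to a dummy scale on the logits.

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits, y):
    """IRMv1 penalty: gradient of the environment risk w.r.t. a dummy
    multiplier on the logits; zero iff the classifier is already
    optimal for this environment. y: float targets in {0, 1}."""
    scale = torch.ones(1, requires_grad=True)
    risk = F.binary_cross_entropy_with_logits(logits * scale, y)
    (grad,) = torch.autograd.grad(risk, scale, create_graph=True)
    return (grad ** 2).sum()

def invariant_objective(model, envs, lam=10.0):
    """Average ERM risk plus the IRM penalty over training environments."""
    total = 0.0
    for X, y in envs:                        # envs: list of (X, y) tensors
        logits = model(X).squeeze(-1)
        total = total + F.binary_cross_entropy_with_logits(logits, y)
        total = total + lam * irmv1_penalty(logits, y)
    return total / len(envs)
```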
Traditional machine learning paradigms are based on the assumption that both training and test data follow the same statistical pattern, which is mathematically referred to as independent and identically distributed (i.i.d.). However, in real-world …
External link:
http://arxiv.org/abs/2108.13624
Domain generalization (DG) aims to help models trained on a set of source domains generalize better to unseen target domains. The performance of current DG methods largely relies on sufficient labeled data, which are, however, usually costly or unavailable …
External link:
http://arxiv.org/abs/2107.06219
Machine learning algorithms with empirical risk minimization are vulnerable under distributional shifts due to the greedy adoption of all the correlations found in training data. There is an emerging literature on tackling this problem by minimizing … (see the sketch after this entry)
External link:
http://arxiv.org/abs/2106.15791
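The snippet is cut off at "minimizing"; the literature it alludes to typically minimizes the worst-case risk over a set of distributions (distributionally robust optimization). A minimal group-DRO-style sketch, assuming the training data comes pre-partitioned into groups (the hard max and all names are our simplification):

```python
import torch
import torch.nn.functional as F

def worst_group_loss(model, groups):
    """Worst-case risk over groups: where ERM averages the loss over all
    samples, DRO takes the maximum per-group loss, steering the
    optimizer toward the hardest distribution. y: float in {0, 1}."""
    losses = []
    for X, y in groups:                      # groups: list of (X, y) tensors
        logits = model(X).squeeze(-1)
        losses.append(F.binary_cross_entropy_with_logits(logits, y))
    return torch.stack(losses).max()
```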
Machine learning algorithms with empirical risk minimization usually suffer from poor generalization performance due to the greedy exploitation of correlations among the training data, which are not stable under distributional shifts. Recently, some …
External link:
http://arxiv.org/abs/2105.03818