Showing 1 - 10 of 173 results for search: '"Liu, Jiashuo"'
For tabular datasets, the change in the relationship between the label and covariates ($Y|X$-shifts) is common due to missing variables (a.k.a. confounders). Since it is impossible to generalize to a completely new and unknown domain, we study models…
External link: http://arxiv.org/abs/2410.07395
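For context (background added here, not part of the abstract): a $Y|X$-shift is usually contrasted with covariate shift by factorizing the joint distribution,

\[
P(X, Y) = P(X)\,P(Y \mid X),
\]

where $P_{\text{train}}(X) \neq P_{\text{test}}(X)$ is an $X$-shift (covariate shift), while $P_{\text{train}}(Y \mid X) \neq P_{\text{test}}(Y \mid X)$ is the $Y|X$-shift (concept shift) studied here, e.g. when an unobserved confounder changes how the label depends on the recorded covariates.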
Graph Neural Networks (GNNs) are widely used for node classification tasks but often fail to generalize when training and test nodes come from different distributions, limiting their practicality. To overcome this, recent approaches adopt invariant learning…
External link: http://arxiv.org/abs/2406.01066
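As background (a generic formulation of invariant learning, not necessarily the exact objective of this paper): invariant-learning approaches typically seek a representation $\Phi$ whose relationship with the label is stable across training environments $e \in \mathcal{E}$,

\[
\mathbb{E}[Y \mid \Phi(X), e] = \mathbb{E}[Y \mid \Phi(X)] \quad \text{for all } e \in \mathcal{E},
\]

so that a classifier built on $\Phi$ keeps working when test nodes come from a different distribution.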
We establish a new model-agnostic optimization framework for out-of-distribution generalization via multicalibration, a criterion that ensures a predictor is calibrated across a family of overlapping groups. Multicalibration is shown to be associated…
External link: http://arxiv.org/abs/2406.00661
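For reference, a standard (approximate) multicalibration condition for a predictor $f$ and a family of overlapping groups $\mathcal{C}$ requires, for every group $S \in \mathcal{C}$ and every prediction level $v$,

\[
\bigl|\, \mathbb{E}[\, Y - f(X) \mid f(X) = v,\; X \in S \,] \,\bigr| \le \alpha,
\]

i.e. $f$ is calibrated not just on average but simultaneously within each group; the exact variant used in the paper may differ (e.g. weighting by group mass).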
The performance of learning models often deteriorates when deployed in out-of-sample environments. To ensure reliable deployment, we propose a stability evaluation criterion based on distributional perturbations. Conceptually, our stability evaluation…
External link: http://arxiv.org/abs/2405.03198
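One generic way to formalize such a criterion (a sketch under assumptions, not necessarily the paper's exact definition): measure stability as the smallest distributional perturbation, in some divergence or transport cost $D$, that drives the model's risk above a tolerance $\tau$,

\[
\mathrm{Stab}(\theta) = \min_{Q} \; D(Q, P) \quad \text{s.t.} \quad \mathbb{E}_{Q}[\ell(\theta; Z)] \ge \tau,
\]

so that larger values mean a stronger perturbation is needed to degrade performance.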
Authors: Zhang, Fengda; He, Qianpei; Kuang, Kun; Liu, Jiashuo; Chen, Long; Wu, Chao; Xiao, Jun; Zhang, Hanwang
Facial Attribute Classification (FAC) holds substantial promise in widespread applications. However, FAC models trained by traditional methodologies can be unfair by exhibiting accuracy inconsistencies across varied data subpopulations. This unfairness…
External link: http://arxiv.org/abs/2403.06606
Generalizing to out-of-distribution (OOD) data or unseen domains, termed OOD generalization, still lacks appropriate theoretical guarantees. Canonical OOD bounds focus on different distance measurements between source and target domains but fail to co…
External link: http://arxiv.org/abs/2403.06392
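A canonical example of such a distance-based bound (given here only as background) is the domain-adaptation bound of Ben-David et al., which controls the target risk of a hypothesis $h$ by its source risk plus a divergence between the two domains,

\[
\epsilon_T(h) \;\le\; \epsilon_S(h) + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda^{*},
\]

where $\lambda^{*}$ is the risk of the best hypothesis jointly on both domains.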
Machine learning models, while progressively advanced, rely heavily on the IID assumption, which is often unfulfilled in practice due to inevitable distribution shifts. This renders them susceptible and untrustworthy for deployment in risk-sensitive…
External link: http://arxiv.org/abs/2403.01874
Machine learning algorithms minimizing average risk are susceptible to distributional shifts. Distributionally Robust Optimization (DRO) addresses this issue by optimizing the worst-case risk within an uncertainty set. However, DRO suffers from over-…
External link: http://arxiv.org/abs/2311.05054
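For background, the DRO problem referred to above is usually written as minimizing the worst-case expected loss over an uncertainty set $\mathcal{U}(P)$ around the training distribution $P$, for instance a divergence ball of radius $\rho$,

\[
\min_{\theta} \; \sup_{Q \in \mathcal{U}(P)} \mathbb{E}_{Q}[\ell(\theta; Z)],
\qquad
\mathcal{U}(P) = \{\, Q : D(Q \,\|\, P) \le \rho \,\},
\]

and the well-known tendency of DRO to be overly conservative stems from $\mathcal{U}(P)$ containing distributions that no realistic test environment would produce.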
Different distribution shifts require different interventions, and algorithms must be grounded in the specific shifts they address. However, methodological development for robust algorithms typically relies on structural assumptions that lack empirical…
External link: http://arxiv.org/abs/2307.05284
Authors: Zhu, Didi; Li, Zexi; Zhang, Min; Yuan, Junkun; Shao, Yunfeng; Liu, Jiashuo; Kuang, Kun; Li, Yinchuan; Wu, Chao
Large-scale vision-language (V-L) models have demonstrated remarkable generalization capabilities for downstream tasks through prompt tuning. However, the mechanisms behind the learned text representations are unknown, limiting further generalization…
External link: http://arxiv.org/abs/2306.15955