Showing 1 - 10 of 47 for the search: '"Zhou, Fengwei"'
Classical object detectors are incapable of detecting novel class objects that are not encountered before. Regarding this issue, Open-Vocabulary Object Detection (OVOD) is proposed, which aims to detect the objects in the candidate class list. However …
External link:
http://arxiv.org/abs/2403.09433
The proliferation of pretrained models, as a result of advancements in pretraining techniques, has led to the emergence of a vast zoo of publicly available models. Effectively utilizing these resources to obtain models with robust out-of-distribution …
External link:
http://arxiv.org/abs/2306.02595
Unraveling the reasons behind the remarkable success and exceptional generalization capabilities of deep neural networks presents a formidable challenge. Recent insights from random matrix theory, specifically those concerning the spectral analysis of …
External link:
http://arxiv.org/abs/2304.02911
Author:
Sun, Rui, Zhou, Fengwei, Dong, Zhenhua, Xie, Chuanlong, Hong, Lanqing, Li, Jiawei, Zhang, Rui, Li, Zhen, Li, Zhenguo
In this work, we propose Fair-CDA, a fine-grained data augmentation strategy for imposing fairness constraints. We use a feature disentanglement method to extract the features highly related to the sensitive attributes. Then we show that group …
External link:
http://arxiv.org/abs/2304.00295
Author:
Dong, Qishi, Muhammad, Awais, Zhou, Fengwei, Xie, Chuanlong, Hu, Tianyang, Yang, Yongxin, Bae, Sung-Ho, Li, Zhenguo
Recent advances in large-scale pre-training have shown great potential in leveraging a large set of Pre-Trained Models (PTMs) for improving Out-of-Distribution (OoD) generalization, for which the goal is to perform well on possible unseen domains after …
External link:
http://arxiv.org/abs/2210.09236
Contrastive learning, especially self-supervised contrastive learning (SSCL), has achieved great success in extracting powerful features from unlabeled data. In this work, we contribute to the theoretical understanding of SSCL and uncover its connection …
External link:
http://arxiv.org/abs/2205.14814
Deep neural networks are susceptible to adversarially crafted, small and imperceptible changes in the natural inputs. The most effective defense mechanism against these examples is adversarial training, which constructs adversarial examples during training …
External link:
http://arxiv.org/abs/2111.05073
Author:
Zhou, Kaichen, Hong, Lanqing, Hu, Shoukang, Zhou, Fengwei, Ru, Binxin, Feng, Jiashi, Li, Zhenguo
Published in:
Transactions on Machine Learning Research 2022
Automated machine learning (AutoML) usually involves several crucial components, such as Data Augmentation (DA) policy, Hyper-Parameter Optimization (HPO), and Neural Architecture Search (NAS). Although many strategies have been developed for automating …
External link:
http://arxiv.org/abs/2109.05765
Recent advances on Out-of-Distribution (OoD) generalization reveal the robustness of deep learning models against distribution shifts. However, existing works focus on OoD algorithms, such as invariant risk minimization, domain generalization, or stable …
External link:
http://arxiv.org/abs/2109.02038
Author:
Awais, Muhammad, Zhou, Fengwei, Xu, Hang, Hong, Lanqing, Luo, Ping, Bae, Sung-Ho, Li, Zhenguo
Extensive Unsupervised Domain Adaptation (UDA) studies have shown great success in practice by learning transferable representations across a labeled source domain and an unlabeled target domain with deep models. However, previous works focus on improving …
External link:
http://arxiv.org/abs/2109.00946