Showing 1 - 10 of 34 results for the search: '"Shen, Chengchao"'
The existing contrastive learning methods mainly focus on single-grained representation learning, e.g., part-level, object-level or scene-level ones, thus inevitably neglecting the transferability of representations on other granularity levels. …
External link:
http://arxiv.org/abs/2407.02014
The existing contrastive learning methods widely adopt one-hot instance discrimination as the pretext task for self-supervised learning, which inevitably neglects rich inter-instance similarities among natural images, thus leading to potential …
External link:
http://arxiv.org/abs/2306.12243
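The "one-hot instance discrimination" pretext task mentioned in this entry treats every image as its own class: an anchor view must match its own augmented view against all other instances in the batch. A minimal InfoNCE-style sketch in PyTorch is shown below; the batch size, temperature and the idea of feeding two augmented views are illustrative assumptions, not the paper's actual settings.

```python
import torch
import torch.nn.functional as F

def instance_discrimination_loss(z_a, z_b, temperature=0.1):
    """One-hot instance discrimination (InfoNCE): each sample's only
    positive is its own augmented view; all other samples are negatives."""
    z_a = F.normalize(z_a, dim=1)          # (N, D) embeddings of view A
    z_b = F.normalize(z_b, dim=1)          # (N, D) embeddings of view B
    logits = z_a @ z_b.t() / temperature   # (N, N) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)  # "one-hot": i matches i
    return F.cross_entropy(logits, targets)

# toy usage with random embeddings standing in for encoder outputs
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = instance_discrimination_loss(z1, z2)
```

The cross-entropy over the identity targets is exactly the "one-hot" part this entry criticizes: similarities to all other instances contribute only as negatives.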
Asymmetric appearance between the two views of a positive pair effectively reduces the risk of representation degradation in contrastive learning. However, there is still a mass of appearance similarity between positive pairs constructed by the existing methods, …
External link:
http://arxiv.org/abs/2306.02854
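One way to picture "asymmetric appearance between positive pair" is to build the two views of an image with augmentation pipelines of very different strength, so they share semantics but not low-level appearance. The torchvision sketch below is only an illustration of that idea; the specific transforms and magnitudes are assumptions, not this paper's recipe.

```python
from torchvision import transforms

# weak branch: mild crop and flip, appearance largely preserved
weak_view = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# strong branch: aggressive color, blur and grayscale changes,
# so the positive pair differs heavily in appearance
strong_view = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply([transforms.ColorJitter(0.8, 0.8, 0.8, 0.2)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=23),
    transforms.ToTensor(),
])

# positive pair for one image x: (weak_view(x), strong_view(x))
```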
Federated learning achieves joint training of deep models by connecting decentralized data sources, which can significantly mitigate the risk of privacy leakage. However, in a more general case, the distributions of labels among clients are different, …
External link:
http://arxiv.org/abs/2212.08883
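Federated learning as summarized here exchanges model updates rather than raw data; the label-skew problem arises because each client optimizes on its own, possibly very different, label distribution. A minimal FedAvg-style aggregation sketch follows; the function name and the size-weighted averaging are generic assumptions for illustration, not this paper's method.

```python
import copy

def fedavg_aggregate(client_states, client_sizes):
    """Average client model parameters, weighted by local dataset size.
    Raw data never leaves the clients; only state_dicts are exchanged."""
    total = sum(client_sizes)
    global_state = copy.deepcopy(client_states[0])
    for key in global_state:
        global_state[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return global_state

# each round: clients train locally on their own (label-skewed) data,
# then the server aggregates and broadcasts the new global model, e.g.
# global_model.load_state_dict(fedavg_aggregate(states, sizes))
```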
Published in:
In Expert Systems With Applications, 5 March 2025, 263
Published in:
In Pattern Recognition, February 2025, 158
Author:
Zheng, Tongya, Feng, Zunlei, Wang, Yu, Shen, Chengchao, Song, Mingli, Wang, Xingen, Wang, Xinyu, Chen, Chun, Xu, Hao
The dynamics of temporal networks lie in the continuous interactions between nodes, which exhibit dynamic node preferences as time elapses. The challenges of mining temporal networks are thus two-fold: the dynamic structure of the networks and the …
External link:
http://arxiv.org/abs/2111.11886
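A temporal network of the kind described in this entry can be represented simply as a time-stamped edge stream. The tiny data-structure sketch below is only illustrative; the field names are assumptions and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One time-stamped edge of a temporal network."""
    src: int          # source node
    dst: int          # destination node
    timestamp: float  # when the interaction happened

# the network is a chronologically ordered stream of interactions
events = sorted(
    [Interaction(0, 1, 3.2), Interaction(1, 2, 5.0), Interaction(0, 2, 1.7)],
    key=lambda e: e.timestamp,
)
```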
Author:
Fang, Gongfan, Bao, Yifan, Song, Jie, Wang, Xinchao, Xie, Donglin, Shen, Chengchao, Song, Mingli
Knowledge distillation (KD) aims to craft a compact student model that imitates the behavior of a pre-trained teacher in a target domain. Prior KD approaches, despite their gratifying results, have largely relied on the premise that in-domain …
External link:
http://arxiv.org/abs/2110.15094
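Knowledge distillation as summarized in this entry trains a compact student to imitate a pre-trained teacher. The standard formulation mixes a hard-label loss with a KL term on temperature-softened logits; the sketch below shows that generic objective (temperature and weighting are illustrative), while the entry's point is that prior work assumes in-domain data is available for this step.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Hinton-style distillation: KL divergence between temperature-softened
    teacher and student distributions, plus cross-entropy on the true labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to account for the softening
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard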
Model inversion, whose goal is to recover training data from a pre-trained model, has recently been proved feasible. However, existing inversion methods usually suffer from the mode collapse problem, where the synthesized instances are highly similar, …
External link:
http://arxiv.org/abs/2105.08584
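Model inversion, as used in this entry, optimizes synthetic inputs so that a frozen pre-trained classifier assigns them to a chosen class; mode collapse means many of the synthesized instances end up nearly identical. A bare-bones sketch of such an inversion step is given below; the image size, step count and the small L2 prior are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def invert_class(model, target_class, steps=200, lr=0.1, img_size=(1, 3, 32, 32)):
    """Synthesize an input that the frozen model classifies as target_class."""
    model.eval()
    x = torch.randn(img_size, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), target) + 1e-4 * x.norm()  # weak image prior
        loss.backward()
        optimizer.step()
    return x.detach()
```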
Generative Adversarial Networks (GANs) have demonstrated unprecedented success in various image generation tasks. The encouraging results, however, come at the price of a cumbersome training process, during which the generator and discriminator are …
External link:
http://arxiv.org/abs/2103.00430
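The "cumbersome training process" this entry refers to is the alternating optimization of generator and discriminator. A skeletal non-saturating GAN update in PyTorch is sketched below; the networks, data source and loss weighting are placeholders rather than this paper's setup.

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, real, opt_g, opt_d, z_dim=128):
    """One alternating update: discriminator first, then generator."""
    z = torch.randn(real.size(0), z_dim)
    fake = G(z)

    # discriminator: push real samples toward 1 and fakes toward 0
    opt_d.zero_grad()
    d_loss = (
        F.binary_cross_entropy_with_logits(D(real), torch.ones(real.size(0), 1))
        + F.binary_cross_entropy_with_logits(D(fake.detach()), torch.zeros(real.size(0), 1))
    )
    d_loss.backward()
    opt_d.step()

    # generator: fool the discriminator (non-saturating loss)
    opt_g.zero_grad()
    g_loss = F.binary_cross_entropy_with_logits(D(fake), torch.ones(real.size(0), 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```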