Showing 1 - 10 of 109 for search: '"Qiang, Wenwen"'
Leveraging the development of structural causal models (SCMs), researchers can establish graphical models for exploring the causal mechanisms behind machine learning techniques. As the complexity of machine learning applications rises, single-world …
External link:
http://arxiv.org/abs/2406.11501
Multi-modal methods establish comprehensive superiority over uni-modal methods. However, the imbalanced contributions of different modalities to task-dependent predictions constantly degrade the discriminative performance of canonical multi-modal methods …
External link:
http://arxiv.org/abs/2406.11490
Existing machine learning techniques may, without loss of generality, learn spurious correlations dependent on the domain, which degrades the generalization of models in out-of-distribution (OOD) scenarios. To address this issue, recent works build …
External link:
http://arxiv.org/abs/2406.11517
Pre-trained large-scale models have become a major research focus, but their effectiveness is limited in real-world applications due to diverse data distributions. In contrast, humans excel at decision-making across various domains by learning reusable …
External link:
http://arxiv.org/abs/2405.15289
Author:
Wang, Jingyao, Qiang, Wenwen, Song, Zeen, Si, Lingyu, Li, Jiangmeng, Zheng, Changwen, Su, Bing
The goal of universality in self-supervised learning (SSL) is to learn universal representations from unlabeled data and achieve excellent performance on all samples and tasks. However, these methods lack explicit modeling of the universality in the …
External link:
http://arxiv.org/abs/2405.01053
Micro-expressions (MEs) are involuntary movements that reveal people's hidden feelings, and they have attracted considerable interest for their objectivity in emotion detection. However, despite its wide applicability in various scenarios, micro-expression recognition …
External link:
http://arxiv.org/abs/2404.12024
Author:
Zhang, Jianqi, Wang, Jingyao, Qiang, Wenwen, Xu, Fanjiang, Zheng, Changwen, Sun, Fuchun, Xiong, Hui
Transformer-based methods have made significant progress in time series forecasting (TSF). They primarily handle two types of tokens, i.e., temporal tokens that contain all variables of the same timestamp, and variable tokens that contain all input …
External link:
http://arxiv.org/abs/2404.10337
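The temporal-token vs. variable-token distinction in the abstract above can be sketched with toy shapes. This is an illustrative reconstruction, not the paper's implementation; all names, dimensions, and the linear "embedding" maps are assumptions.

```python
import numpy as np

# Toy multivariate series: T timestamps x V variables (illustrative sizes).
rng = np.random.default_rng(0)
T, V, d_model = 8, 3, 16
series = rng.normal(size=(T, V))

# Temporal tokens: one token per timestamp, embedding all V variables
# observed at that timestamp (a per-timestamp map V -> d_model).
W_temporal = rng.normal(size=(V, d_model))
temporal_tokens = series @ W_temporal          # shape (T, d_model): T tokens

# Variable tokens: one token per variable, embedding that variable's
# full history of T input timestamps (a per-variable map T -> d_model).
W_variable = rng.normal(size=(T, d_model))
variable_tokens = series.T @ W_variable        # shape (V, d_model): V tokens

print(temporal_tokens.shape, variable_tokens.shape)
```

Either token set can then be fed to a standard Transformer encoder; the choice decides whether attention mixes information across time or across variables.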
Self-Supervised Learning (SSL) methods harness the concept of semantic invariance by utilizing data augmentation strategies to produce similar representations for different deformations of the same input. Essentially, the model captures the shared …
External link:
http://arxiv.org/abs/2403.01549
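The invariance idea described above can be sketched in a few lines: encode two augmented views of one input and penalize their dissimilarity. The augmentation, encoder, and weights below are toy assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, rng):
    # Toy "deformation": a random rescaling plus small additive noise.
    return x * rng.uniform(0.9, 1.1) + rng.normal(0.0, 0.05, size=x.shape)

def encode(x, W):
    # Toy encoder: linear map followed by L2 normalization.
    z = x @ W
    return z / np.linalg.norm(z)

x = rng.normal(size=4)           # one input sample
W = rng.normal(size=(4, 8))      # hypothetical encoder weights

z1 = encode(augment(x, rng), W)  # representation of view 1
z2 = encode(augment(x, rng), W)  # representation of view 2

# Invariance objective: minimize 1 - cosine similarity between the two views.
invariance_loss = 1.0 - float(z1 @ z2)
print(invariance_loss)
```

Because both views are mild deformations of the same input, a well-trained encoder drives this loss toward zero, which is exactly the "similar representations for different deformations" property.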
Author:
Li, Jiangmeng, Song, Fei, Jin, Yifan, Qiang, Wenwen, Zheng, Changwen, Sun, Fuchun, Xiong, Hui
As a novel and effective fine-tuning paradigm based on large-scale pre-trained language models (PLMs), prompt-tuning aims to reduce the gap between downstream tasks and pre-training objectives. While prompt-tuning has yielded continuous advancements …
External link:
http://arxiv.org/abs/2401.14166
Graph contrastive learning (GCL) aims to align positive features while differentiating negative features in the latent space by minimizing a pair-wise contrastive loss. As the embodiment of an outstanding discriminative unsupervised graph representation …
External link:
http://arxiv.org/abs/2312.14222