Showing 1 - 10 of 36 results for search: '"Mai, Zheda"'
Parameter-efficient transfer learning (PETL) has attracted significant attention lately, due to the increasing size of pre-trained models and the need to fine-tune (FT) them for superior downstream performance. This community-wide enthusiasm has spar…
External link:
http://arxiv.org/abs/2409.16434
Author:
Mai, Zheda, Chowdhury, Arpita, Zhang, Ping, Tu, Cheng-Hao, Chen, Hong-You, Pahuja, Vardaan, Berger-Wolf, Tanya, Gao, Song, Stewart, Charles, Su, Yu, Chao, Wei-Lun
Fine-tuning is arguably the most straightforward way to tailor a pre-trained model (e.g., a foundation model) to downstream applications, but it also comes with the risk of losing valuable knowledge the model had learned in pre-training. For example,…
External link:
http://arxiv.org/abs/2409.16223
Author:
Kil, Jihyung, Mai, Zheda, Lee, Justin, Wang, Zihe, Cheng, Kerrie, Wang, Lemeng, Liu, Ye, Chowdhury, Arpita, Chao, Wei-Lun
The ability to compare objects, scenes, or situations is crucial for effective decision-making and problem-solving in everyday life. For instance, comparing the freshness of apples enables better choices during grocery shopping, while comparing sofa…
External link:
http://arxiv.org/abs/2407.16837
Author:
Tu, Cheng-Hao, Chen, Hong-You, Mai, Zheda, Zhong, Jike, Pahuja, Vardaan, Berger-Wolf, Tanya, Gao, Song, Stewart, Charles, Su, Yu, Chao, Wei-Lun
We propose a learning problem involving adapting a pre-trained source model to the target domain for classifying all classes that appeared in the source data, using target data that covers only a partial label space. This problem is practical, as it…
External link:
http://arxiv.org/abs/2311.01420
Weakly supervised semantic segmentation (WSSS) aims to bypass the need for laborious pixel-level annotation by using only image-level annotation. Most existing methods rely on Class Activation Maps (CAM) to derive pixel-level pseudo-labels and use th…
External link:
http://arxiv.org/abs/2305.05803
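The CAM-based pseudo-labeling pipeline mentioned in the abstract above can be illustrated with a minimal sketch of the standard CAM formulation (a weighted sum of final-layer feature maps, thresholded into pseudo-labels). The function names and the threshold value are illustrative assumptions, not from the paper itself:

```python
import numpy as np

def class_activation_map(features, weights, class_idx):
    """Standard CAM: weight the final conv feature maps by the
    global-average-pool classifier weights for one class.

    features: (C, H, W) activations from the last conv layer.
    weights:  (num_classes, C) classifier weight matrix.
    Returns an (H, W) map normalized to [0, 1].
    """
    # Contract the channel axis of the weights with the channel axis
    # of the feature maps, yielding an (H, W) activation map.
    cam = np.tensordot(weights[class_idx], features, axes=1)
    cam = np.maximum(cam, 0.0)          # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()           # normalize to [0, 1]
    return cam

def pseudo_labels(cam, threshold=0.3):
    # Pixels whose activation exceeds the threshold become foreground
    # pseudo-labels for supervised segmentation training.
    return (cam >= threshold).astype(np.uint8)
```

In WSSS pipelines these thresholded maps then stand in for the missing pixel-level annotation when training a segmentation network.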
Intermediate features of a pre-trained model have been shown informative for making accurate predictions on downstream tasks, even if the model backbone is kept frozen. The key challenge is how to utilize these intermediate features given their gigan…
External link:
http://arxiv.org/abs/2212.03220
Published in:
Journal of Visual Communication and Image Representation 2023
Weakly supervised semantic segmentation (WSSS) with only image-level supervision is a challenging task. Most existing methods exploit Class Activation Maps (CAM) to generate pixel-level pseudo labels for supervised training. However, due to the local…
External link:
http://arxiv.org/abs/2203.07239
Conversational Recommendation Systems (CRSs) have recently started to leverage pretrained language models (LM) such as BERT for their ability to semantically interpret a wide range of preference statement variations. However, pretrained LMs are well-…
External link:
http://arxiv.org/abs/2201.06224
Online class-incremental continual learning (CL) studies the problem of learning new classes continually from an online non-stationary data stream, intending to adapt to new data while mitigating catastrophic forgetting. While memory replay has shown…
External link:
http://arxiv.org/abs/2103.13885
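The memory-replay approach named in the abstract above is commonly built on a fixed-size buffer filled by reservoir sampling, so that every example in the stream has an equal chance of being retained. The sketch below is illustrative only; the class and method names are assumptions, not the paper's implementation:

```python
import random

class ReplayBuffer:
    """Fixed-capacity memory for experience replay in online CL."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.memory = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        # Reservoir sampling: after n_seen examples, each one ever seen
        # remains in memory with probability capacity / n_seen.
        self.n_seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, k):
        # Draw a replay mini-batch to interleave with the incoming
        # stream batch, mitigating catastrophic forgetting.
        return self.rng.sample(self.memory, min(k, len(self.memory)))
```

At each training step the learner would update on the current stream batch together with a batch drawn via `sample`, which is the basic replay loop these online CL papers build on.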
Online continual learning for image classification studies the problem of learning to classify images from an online stream of data and tasks, where tasks may include new classes (class incremental) or data nonstationarity (domain incremental). One o…
External link:
http://arxiv.org/abs/2101.10423