Showing 1 - 10 of 27 for search: '"Bae, Wonho"'
The ProbCover method of Yehuda et al. is a well-motivated algorithm for active learning in low-budget regimes, which attempts to "cover" the data distribution with balls of a given radius at selected data points. We demonstrate, however, that the per…
External link:
http://arxiv.org/abs/2407.12212
Author:
Bae, Wonho, Ren, Yi, Ahmed, Mohamad Osama, Tung, Frederick, Sutherland, Danica J., Oliveira, Gabriel L.
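The snippet above refers to covering the data distribution with balls of a fixed radius around selected points. Below is a minimal, illustrative sketch of that greedy ball-coverage idea (in the spirit of ProbCover), not the algorithm from the linked paper; the embedding matrix `X`, the radius `delta`, and the budget are placeholder assumptions.

```python
import numpy as np

def greedy_coverage_selection(X, budget, delta):
    """Greedily pick `budget` rows of X (n, d) whose delta-balls cover the most points."""
    n = X.shape[0]
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    covers = dists <= delta              # covers[i, j]: point i covers point j
    uncovered = np.ones(n, dtype=bool)
    selected = []
    for _ in range(budget):
        # How many still-uncovered points each candidate would newly cover.
        gains = (covers & uncovered[None, :]).sum(axis=1)
        best = int(np.argmax(gains))
        selected.append(best)
        uncovered &= ~covers[best]       # mark the newly covered points
    return selected

# Illustrative usage with random embeddings.
rng = np.random.default_rng(0)
picked = greedy_coverage_selection(rng.normal(size=(500, 32)), budget=10, delta=4.0)
```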
Although neural networks are conventionally optimized towards zero training loss, it has been recently learned that targeting a non-zero training loss threshold, referred to as a flood level, often enables better test time generalization. Current app…
External link:
http://arxiv.org/abs/2311.02891
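The "flood level" mentioned above comes from the flooding objective of earlier work (Ishida et al.): the mini-batch loss is reflected around a threshold b so that optimization stops pushing the training loss below b. A minimal PyTorch sketch follows; the flood level value and the cross-entropy choice are illustrative assumptions, not settings from the linked paper.

```python
import torch
import torch.nn.functional as F

def flooded_loss(logits, targets, flood_level=0.05):
    loss = F.cross_entropy(logits, targets)          # ordinary training loss
    # Reflect the loss around the flood level: gradients reverse sign once
    # the batch loss drops below b, keeping the training loss near b.
    return (loss - flood_level).abs() + flood_level

# Typical use inside a training step (model and optimizer assumed to exist):
#   loss = flooded_loss(model(x), y, flood_level=0.05)
#   loss.backward(); optimizer.step()
```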
Most meta-learning methods assume that the (very small) context set used to establish a new task at test time is passively provided. In some settings, however, it is feasible to actively select which points to label; the potential gain from a careful…
External link:
http://arxiv.org/abs/2311.02879
Published in:
ICLR 2023
In deep learning, transferring information from a pretrained network to a downstream task by finetuning has many benefits. The choice of task head plays an important role in fine-tuning, as the pretrained and downstream tasks are usually different. A…
External link:
http://arxiv.org/abs/2302.05779
A temporal point process (TPP) is a stochastic process whose realization is a sequence of discrete events in time. Recent work on TPPs models the process using a neural network in a supervised learning framework, where a training set is a collecti…
External link:
http://arxiv.org/abs/2301.12023
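For context on the supervised framing mentioned above: neural TPP models are usually trained by maximizing the sequence log-likelihood, the sum of log-intensities at the observed events minus the integrated intensity over the observation window. A minimal sketch with a Monte Carlo estimate of that integral is below; `intensity_fn` is a placeholder for a neural intensity model, and history conditioning is omitted for brevity.

```python
import torch

def tpp_negative_log_likelihood(intensity_fn, event_times, T, n_mc=1000):
    """event_times: 1-D tensor of event times in [0, T]; T: window length."""
    log_term = torch.log(intensity_fn(event_times)).sum()
    # Monte Carlo estimate of the compensator integral over [0, T].
    s = torch.rand(n_mc) * T
    integral_term = T * intensity_fn(s).mean()
    return -(log_term - integral_term)
```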
Weakly Supervised Object Detection (WSOD) is a task that detects objects in an image using a model trained only on image-level annotations. Current state-of-the-art models benefit from self-supervised instance-level supervision, but since weak superv…
External link:
http://arxiv.org/abs/2208.07576
We propose a new method for approximating active learning acquisition strategies that are based on retraining with hypothetically-labeled candidate data points. Although this is usually infeasible with deep networks, we use the neural tangent kernel…
External link:
http://arxiv.org/abs/2206.12569
Published in:
ICML 2023
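As a rough illustration of the retraining-with-hypothetical-labels idea above, the sketch below uses plain kernel ridge regression as a stand-in for the retrained network (the entry says the paper uses the neural tangent kernel for that role): each candidate is added with each hypothetical binary label, the regressor is refit, and candidates are scored by how much the pool predictions move. The kernel matrix `K`, the ridge value, and the scoring rule are illustrative assumptions, not the paper's acquisition function.

```python
import numpy as np

def krr_fit_predict(K, train_idx, y_train, test_idx, ridge=1e-3):
    """Kernel ridge regression with a precomputed kernel matrix K."""
    K_tt = K[np.ix_(train_idx, train_idx)]
    alpha = np.linalg.solve(K_tt + ridge * np.eye(len(train_idx)), y_train)
    return K[np.ix_(test_idx, train_idx)] @ alpha

def look_ahead_scores(K, labeled_idx, y_labeled, pool_idx, ridge=1e-3):
    base = krr_fit_predict(K, list(labeled_idx), y_labeled, list(pool_idx), ridge)
    scores = []
    for c in pool_idx:
        aug_idx = list(labeled_idx) + [c]
        change = 0.0
        for y_hyp in (-1.0, 1.0):                    # hypothetical binary labels
            aug_y = np.concatenate([y_labeled, [y_hyp]])
            pred = krr_fit_predict(K, aug_idx, aug_y, list(pool_idx), ridge)
            change += np.abs(pred - base).mean()
        scores.append(change / 2.0)
    return np.array(scores)                          # query the argmax candidate
```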
Empirical neural tangent kernels (eNTKs) can provide a good understanding of a given network's representation: they are often far less expensive to compute and applicable more broadly than infinite width NTKs. For networks with O output units (e.g. a…
External link:
http://arxiv.org/abs/2206.12543
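The empirical NTK mentioned above is, for a scalar network output, just the Gram matrix of per-example parameter gradients. The sketch below computes it with an explicit gradient loop, summing the outputs to get one scalar per example; this sidesteps the multi-output (O units) structure the entry refers to and is purely illustrative.

```python
import torch

def scalar_entk(model, inputs):
    """inputs: (n, ...) batch; returns the (n, n) empirical NTK for a scalar output."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = []
    for x in inputs:
        out = model(x.unsqueeze(0)).sum()            # one scalar per example
        g = torch.autograd.grad(out, params)
        grads.append(torch.cat([gi.reshape(-1) for gi in g]))
    G = torch.stack(grads)                           # (n, total parameter count)
    return G @ G.T                                   # K[i, j] = <grad_i, grad_j>
```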
Semi-weakly supervised semantic segmentation (SWSSS) aims to train a model to identify objects in images based on a small number of images with pixel-level labels, and many more images with only image-level labels. Most existing SWSSS algorithms extr…
External link:
http://arxiv.org/abs/2205.01233
Author:
Wei, Yunchao, Zheng, Shuai, Cheng, Ming-Ming, Zhao, Hang, Wang, Liwei, Ding, Errui, Yang, Yi, Torralba, Antonio, Liu, Ting, Sun, Guolei, Wang, Wenguan, Van Gool, Luc, Bae, Wonho, Noh, Junhyug, Seo, Jinhwan, Kim, Gunhee, Zhao, Hao, Lu, Ming, Yao, Anbang, Guo, Yiwen, Chen, Yurong, Zhang, Li, Tan, Chuangchuang, Ruan, Tao, Gu, Guanghua, Wei, Shikui, Zhao, Yao, Dobko, Mariia, Viniavskyi, Ostap, Dobosevych, Oles, Wang, Zhendong, Chen, Zhenyuan, Gong, Chen, Yan, Huanqing, He, Jun
Learning from imperfect data becomes an issue in many industrial applications after the research community has made profound progress in supervised learning from perfectly annotated datasets. The purpose of the Learning from Imperfect Data (LID) work…
External link:
http://arxiv.org/abs/2010.11724