Showing 1 - 10 of 83 for search: '"Shin, Seungjae"'
The objective of domain generalization (DG) is to enhance the transferability of a model learned on a source domain to unobserved domains. To prevent overfitting to a specific domain, Sharpness-Aware Minimization (SAM) reduces the source domain's loss sharpness …
External link:
http://arxiv.org/abs/2403.07329
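For context on the SAM objective the snippet mentions, here is a minimal sketch of one SAM update step, assuming PyTorch; `model`, `loss_fn`, `x`, `y`, and `rho` are illustrative placeholders, and the paper's own method, which builds on top of SAM, is not reproduced here.

import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    # First pass: gradient at the current weights.
    loss = loss_fn(model(x), y)
    loss.backward()

    # Perturb weights in the locally sharpest direction: eps = rho * g / ||g||.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)

    # Second pass: the gradient at the perturbed weights defines the SAM update.
    base_optimizer.zero_grad()
    loss_fn(model(x), y).backward()

    # Undo the perturbation, then step with the sharpness-aware gradient.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()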
For learning with noisy labels, the transition matrix, which explicitly models the relation between the noisy label distribution and the clean label distribution, has been utilized to achieve the statistical consistency of either the classifier or the risk.
External link:
http://arxiv.org/abs/2403.02690
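A minimal sketch of the general transition-matrix idea, assuming PyTorch: forward loss correction with a known matrix T where T[i][j] = P(noisy = j | clean = i). This illustrates the technique the snippet names, not the specific method of the paper.

import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_targets, T):
    # Clean-label posterior predicted by the classifier.
    clean_probs = F.softmax(logits, dim=1)   # (batch, classes)
    # Push the clean posterior through T to get the implied noisy-label posterior.
    noisy_probs = clean_probs @ T            # P(noisy | x)
    # Negative log-likelihood of the observed noisy labels.
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_targets)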
This paper presents FreD, a novel parameterization method for dataset distillation, which utilizes the frequency domain to distill a small synthetic dataset from a large original dataset. Unlike conventional approaches that focus on the spatial domain …
External link:
http://arxiv.org/abs/2311.08819
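A minimal sketch of frequency-domain parameterization in the spirit of the snippet, assuming PyTorch: store only a low-frequency block of coefficients per synthetic image and decode with an inverse FFT. The real FreD method differs in its transform choice, coefficient selection, and distillation objective; `H`, `W`, and `K` are illustrative.

import torch

H = W = 32   # decoded image size (illustrative)
K = 8        # low-frequency rows/cols kept per image

# Learnable complex coefficients for the kept low-frequency block.
coef = torch.zeros(K, K, dtype=torch.cfloat, requires_grad=True)

def decode(coef):
    # Place the learned block into an otherwise-zero spectrum, then invert.
    spectrum = torch.zeros(H, W // 2 + 1, dtype=torch.cfloat)
    spectrum[:K, :K] = coef
    return torch.fft.irfft2(spectrum, s=(H, W))  # real-valued H x W image

image = decode(coef)  # differentiable w.r.t. `coef`, so any distillation
                      # loss on `image` trains the frequency parameters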
Training neural networks on a large dataset requires substantial computational cost. Dataset reduction selects or synthesizes data instances based on the large dataset while minimizing the degradation in generalization performance relative to the full dataset …
External link:
http://arxiv.org/abs/2303.04449
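As one concrete instance of the selection side of dataset reduction, here is a minimal sketch of a simple coreset heuristic, assuming NumPy and scikit-learn: keep the instance nearest to each k-means centroid in feature space. The paper above studies far more sophisticated selection and synthesis (distillation) methods.

import numpy as np
from sklearn.cluster import KMeans

def kmeans_coreset(features, budget, seed=0):
    # features: (n, d) array of per-instance embeddings; budget: subset size.
    km = KMeans(n_clusters=budget, n_init=10, random_state=seed).fit(features)
    selected = []
    for c in range(budget):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])  # closest point to centroid
    return np.array(selected)  # indices into the original dataset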
Author:
Shin, Seungjae
Published in:
Journal of International Logistics and Trade, 2024, Vol. 22, Issue 1, pp. 22-38.
External link:
http://www.emeraldinsight.com/doi/10.1108/JILT-07-2023-0046
Noisy labels are inevitable yet problematic in the machine learning community. They ruin the generalization of a classifier by making it overfit to the noisy labels. Existing methods for noisy labels have focused on modifying the classifier during …
External link:
http://arxiv.org/abs/2205.00690
Existing semi-supervised learning (SSL) algorithms typically assume class-balanced datasets, although the class distributions of many real-world datasets are imbalanced. In general, classifiers trained on a class-imbalanced dataset are biased toward the majority classes …
External link:
http://arxiv.org/abs/2110.10368
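To make the bias problem in the snippet concrete, here is a minimal sketch of one common mitigation in class-imbalanced SSL, assuming PyTorch: down-weight the pseudo-label loss for frequent classes by inverse class frequency. `class_counts` (per-class labeled counts) and `threshold` are illustrative placeholders; the paper's actual algorithm is not reproduced here.

import torch
import torch.nn.functional as F

def reweighted_pseudo_loss(logits_u, class_counts, threshold=0.95):
    probs = F.softmax(logits_u, dim=1)
    conf, pseudo = probs.max(dim=1)          # confidence and pseudo-label
    mask = conf >= threshold                 # keep only confident predictions
    weights = 1.0 / class_counts.float()     # inverse-frequency weights
    weights = weights / weights.sum() * len(class_counts)
    loss = F.cross_entropy(logits_u, pseudo, reduction="none")
    return (loss * weights[pseudo] * mask).sum() / mask.sum().clamp(min=1)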
Published in:
International Conference on Machine Learning (ICML) 2022
Recent advances in diffusion models bring state-of-the-art performance on image generation tasks. However, empirical results from previous research on diffusion models imply an inverse correlation between density estimation and sample generation performance …
External link:
http://arxiv.org/abs/2106.05527
Knowledge distillation is a method of transferring the knowledge of a pretrained complex teacher model to a student model, so that a smaller network can replace the large teacher network at the deployment stage. To reduce the necessity of training a large teacher network …
External link:
http://arxiv.org/abs/2103.08273
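A minimal sketch of standard knowledge distillation (a Hinton-style temperature-scaled KL term plus ordinary cross-entropy), assuming PyTorch, to make the transfer idea in the snippet concrete. The paper itself concerns reducing the cost of training the large teacher, which this sketch does not address; `T` and `alpha` are illustrative hyperparameters.

import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    # Soft targets: match the teacher's temperature-smoothed distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard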
A simulation is useful when the phenomenon of interest is either expensive to regenerate or irreproducible under the same context. Recently, Bayesian inference on the distribution of the simulation input parameter has been implemented sequentially to …
External link:
http://arxiv.org/abs/2102.07770
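As background for the inference problem the snippet describes, here is a minimal sketch of rejection ABC (approximate Bayesian computation) for inferring a simulator's input parameter from observed output, assuming NumPy. The paper concerns a more elaborate sequential Bayesian scheme, which this does not reproduce; `simulate`, the prior range, and `tol` are toy placeholders.

import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    # Toy simulator: noisy observations centered at the input parameter.
    return rng.normal(theta, 1.0, size=n)

def abc_posterior(observed, n_draws=10_000, tol=0.1):
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)   # draw from the prior
        x = simulate(theta)
        # Keep draws whose summary statistic matches the observation.
        if abs(x.mean() - observed.mean()) < tol:
            accepted.append(theta)
    return np.array(accepted)            # samples from the approximate posterior

obs = simulate(1.5)
print(abc_posterior(obs).mean())         # roughly 1.5 for a well-behaved run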