Showing 1 - 10 of 58 for the search: "Chidambaram, Muthu"
The use of guidance in diffusion models was originally motivated by the premise that the guidance-modified score is that of the data distribution tilted by a conditional likelihood raised to some power. In this work we clarify this misconception […]
External link:
http://arxiv.org/abs/2409.13074
Author:
Chidambaram, Muthu, Ge, Rong
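The tilted distribution named in this abstract can be written out as a worked equation. Under a common guidance convention (the symbols below are standard notation, not taken from the paper), for a condition c and guidance weight w >= 0 the tilted target is

\[ p_w(x \mid c) \;\propto\; p(x \mid c)\, p(c \mid x)^{w}, \]

whose score is exactly the guidance-modified score

\[ \nabla_x \log p_w(x \mid c) \;=\; \nabla_x \log p(x \mid c) + w\, \nabla_x \log p(c \mid x). \]

The paper's point is that guided sampling was presumed to draw from p_w, a premise it argues is mistaken.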
A machine learning model is calibrated if its predicted probability for an outcome matches the observed frequency for that outcome conditional on the model prediction. This property has become increasingly important as the impact of machine learning […]
External link:
http://arxiv.org/abs/2406.04068
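The calibration property stated in this abstract has a standard formal counterpart. For a binary outcome Y and a model confidence f(X) (this notation is a common convention, not taken from the paper), calibration requires

\[ \Pr\bigl[\, Y = 1 \mid f(X) = p \,\bigr] = p \quad \text{for all attainable } p, \]

i.e. among the inputs assigned confidence p, the outcome actually occurs a fraction p of the time.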
Informally, a model is calibrated if its predictions are correct with a probability that matches the confidence of the prediction. By far the most common method in the literature for measuring calibration is the expected calibration error (ECE). […]
External link:
http://arxiv.org/abs/2402.10046
Author:
Chidambaram, Muthu, Ge, Rong
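Since this abstract centers on the ECE, a minimal sketch of the usual binned estimator may help; it assumes NumPy arrays of top-label confidences and 0/1 correctness flags, and the function name and bin count are illustrative choices rather than anything from the paper:

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: bin-weighted average of |accuracy - mean confidence|."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if lo == 0.0:                      # fold exact zeros into the first bin
            mask |= confidences == 0.0
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap       # weight by fraction of samples in bin
    return ece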
Data augmentation has been pivotal in successfully training deep learning models on classification tasks over the past decade. An important subclass of data augmentation techniques - which includes both label smoothing and Mixup - involves modifying […]
External link:
http://arxiv.org/abs/2402.06855
Author:
Chidambaram, Muthu, Ge, Rong
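Of the two label-modifying techniques this abstract names, label smoothing is the quicker to sketch; a hedged NumPy version follows (the function name and default eps are illustrative), and a matching Mixup sketch appears after the arXiv:2210.13512 entry below:

import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: blend hard one-hot targets with the uniform distribution."""
    k = one_hot.shape[-1]                   # number of classes
    return (1.0 - eps) * one_hot + eps / k  # soft targets still sum to 1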
Despite the impressive generalization capabilities of deep neural networks, they have been repeatedly shown to be overconfident when they are wrong. Fixing this issue is known as model calibration, and has consequently received much attention […]
External link:
http://arxiv.org/abs/2306.00740
Sparse coding, which refers to modeling a signal as sparse linear combinations of the elements of a learned dictionary, has proven to be a successful (and interpretable) approach in applications such as signal processing, computer vision, and medical imaging […]
External link:
http://arxiv.org/abs/2302.12715
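The model this abstract refers to is commonly posed as a dictionary-learning objective; under standard notation (a common convention, not taken from the paper), one fits a dictionary D and sparse codes alpha via

\[ \min_{D,\,\alpha}\; \|x - D\alpha\|_2^2 + \lambda \|\alpha\|_1, \]

so that each signal satisfies x \approx \sum_j \alpha_j d_j with only a few nonzero coefficients.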
Mixup is a data augmentation technique that relies on training using random convex combinations of data points and their labels. In recent years, Mixup has become a standard primitive used in the training of state-of-the-art image classification models […]
External link:
http://arxiv.org/abs/2210.13512
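A minimal sketch of the Mixup primitive described in this abstract, assuming NumPy batches with one-hot labels; drawing the mixing coefficient from a Beta distribution is the standard choice, and all names here are illustrative:

import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=np.random.default_rng(0)):
    """Train on random convex combinations of data points and their labels."""
    lam = rng.beta(alpha, alpha)        # mixing weight in (0, 1)
    perm = rng.permutation(len(x))      # random partner for each example
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]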
In the Mixup training paradigm, a model is trained using convex combinations of data points and their associated labels. Despite seeing very few true data points during training, models trained using Mixup seem to still minimize the original empirical risk […]
External link:
http://arxiv.org/abs/2110.07647
Author:
Chidambaram, Muthu, Ge, Rong
Despite the impressive generalization capabilities of deep neural networks, they have been repeatedly shown to poorly estimate their predictive uncertainty - in other words, they are frequently overconfident when they are wrong. Fixing this issue is known as model calibration […]
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::1be18cb94c91bf567499692948cc992d
Author:
Arunraj Sambandam, Bharat Kumar Ramalingam Jeyashankaran, Vinoth Thangamani, Sudeep Kumar Velur Nagendra Reddy, Chidambaram Muthu, S. Raju
Published in:
Journal of Orthopedics and Joint Surgery. 3:91-94