Showing 1 - 9 of 9 for the search: '"Xie, Yuege"'
Concept shift is a prevailing problem in natural tasks like medical image segmentation where samples usually come from different subpopulations with variant correlations between features and labels. One common type of concept shift in medical image segmentation …
External link:
http://arxiv.org/abs/2210.01891
Sparse shrunk additive models and sparse random feature models have been developed separately as methods to learn low-order functions, where there are few interactions between variables, but neither offers computational efficiency. On the other hand, …
External link:
http://arxiv.org/abs/2112.04002
We propose a computationally-friendly adaptive learning rate schedule, "AdaLoss", which directly uses the information of the loss function to adjust the stepsize in gradient descent methods. We prove that this schedule enjoys linear convergence in …
External link:
http://arxiv.org/abs/2109.08282
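The AdaLoss entry above describes a stepsize that is adapted directly from the loss value rather than from accumulated gradient norms. The Python sketch below (with hypothetical names adaloss_gd, alpha, b0) shows one plausible reading of that idea; the paper's exact accumulation rule and constants may differ.

import numpy as np

def adaloss_gd(loss_fn, grad_fn, theta0, eta=1.0, alpha=1.0, b0=0.1, num_steps=1000):
    # Gradient descent with a loss-driven stepsize: the scalar accumulator grows
    # with the observed loss values, so the effective stepsize eta / b shrinks
    # without any gradient-norm bookkeeping. Illustrative sketch only, not the
    # paper's exact update rule.
    theta = np.asarray(theta0, dtype=float)
    b_sq = b0 ** 2
    for _ in range(num_steps):
        g = grad_fn(theta)
        b_sq += alpha * loss_fn(theta)  # accumulate the loss value itself
        theta = theta - (eta / np.sqrt(b_sq)) * g
    return theta

For a least-squares loss such an update behaves much like the AdaGrad-Norm update shown further down, since there the loss value and the squared gradient norm agree up to the conditioning of the design matrix.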
Motivated by surprisingly good generalization properties of learned deep neural networks in overparameterized scenarios and by the related double descent phenomenon, this paper analyzes the relation between smoothness and low generalization error in …
External link:
http://arxiv.org/abs/2006.08495
Published in:
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in PMLR 108:1475-1485 (2020)
We prove that the norm version of the adaptive stochastic gradient method (AdaGrad-Norm) achieves a linear convergence rate for a subset of either strongly convex functions or non-convex functions that satisfy the Polyak-Łojasiewicz (PL) inequality.
External link:
http://arxiv.org/abs/1908.10525
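For reference, the AdaGrad-Norm update analyzed above keeps a single scalar accumulator, grows it by the squared gradient norm at each step, and divides a fixed stepsize by its square root. A minimal Python sketch follows; the function and parameter names (adagrad_norm, eta, b0) are mine, not the paper's.

import numpy as np

def adagrad_norm(grad_fn, theta0, eta=1.0, b0=0.1, num_steps=1000):
    # Norm version of AdaGrad: one scalar accumulator instead of per-coordinate ones.
    theta = np.asarray(theta0, dtype=float)
    b_sq = b0 ** 2
    for _ in range(num_steps):
        g = grad_fn(theta)
        b_sq += float(np.dot(g, g))  # accumulate ||grad||^2
        theta = theta - (eta / np.sqrt(b_sq)) * g
    return theta

# Example: a strongly convex quadratic 0.5 * theta^T A theta satisfies the PL
# inequality, so the linear-rate regime described in the abstract applies.
A = np.diag([1.0, 2.0, 3.0])
theta_hat = adagrad_norm(lambda t: A @ t, theta0=np.ones(3))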
Fourier ptychographic microscopy (FPM) is a computational imaging technique that overcomes the physical space-bandwidth product (SBP) limit of a conventional microscope by applying angular diversity illuminations. In the usual model of FPM, …
External link:
http://arxiv.org/abs/1710.06717
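The FPM entry above refers to the standard Fourier ptychography forward model: each tilted illumination shifts the object spectrum, the pupil of the objective low-pass filters it, and the camera records an intensity image. A simplified numpy sketch of one such measurement (ignoring the downsampling to the camera grid; fpm_measurement and its arguments are illustrative names):

import numpy as np

def fpm_measurement(obj_spectrum, pupil, shift):
    # One low-resolution FPM image: the illumination angle shifts the centered
    # object spectrum by `shift` pixels, the pupil function crops it, and the
    # detector records the intensity of the inverse-transformed field.
    shifted = np.roll(obj_spectrum, shift, axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(pupil * shifted))
    return np.abs(field) ** 2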
Published in:
Proceedings of the AAAI Conference on Artificial Intelligence. 36:8691-8699
We propose a computationally-friendly adaptive learning rate schedule, "AdaLoss", which directly uses the information of the loss function to adjust the stepsize in gradient descent methods. We prove that this schedule enjoys linear convergence in …
Academic article (sign-in required to view this result).
Academic article (sign-in required to view this result).