Showing 1 - 10 of 44 results for search: '"LeJeune, Daniel"'
Estimating out-of-sample risk for models trained on large high-dimensional datasets is an expensive but essential part of the machine learning process, enabling practitioners to optimally tune hyperparameters. Cross-validation (CV) serves as the de facto…
External link:
http://arxiv.org/abs/2409.09781
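As a rough illustration of the risk-estimation setting in the entry above (not the estimator proposed in that paper), the sketch below computes a K-fold cross-validation estimate of out-of-sample risk for ridge regression; the data, penalty grid, and fold count are made-up assumptions.

```python
# A minimal sketch of K-fold cross-validation for out-of-sample risk estimation
# with ridge regression. Generic illustration only; not the paper's estimator.
import numpy as np

def ridge_fit(X, y, lam):
    """Solve (X^T X + lam * I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def kfold_cv_risk(X, y, lam, k=5, seed=0):
    """Average held-out squared error over k folds."""
    n = X.shape[0]
    idx = np.random.default_rng(seed).permutation(n)
    risks = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w = ridge_fit(X[train], y[train], lam)
        risks.append(np.mean((X[fold] @ w - y[fold]) ** 2))
    return np.mean(risks)

rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d) / np.sqrt(d)
y = X @ w_star + 0.5 * rng.standard_normal(n)

for lam in [0.01, 0.1, 1.0, 10.0]:
    print(f"lambda={lam:5.2f}  CV risk={kfold_cv_risk(X, y, lam):.3f}")
```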
While effective in practice, iterative methods for solving large systems of linear equations can be significantly affected by problem-dependent condition number quantities. This makes characterizing their time complexity challenging, particularly when…
External link:
http://arxiv.org/abs/2405.05818
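To illustrate how problem-dependent condition numbers enter the running time of iterative solvers (a generic illustration, not the method in the paper above), the sketch below runs standard conjugate gradient on synthetic positive definite systems with prescribed condition numbers and reports iteration counts.

```python
# A minimal sketch: conjugate gradient iteration counts grow with the condition
# number of the system matrix. Not the paper's algorithm or analysis.
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=10_000):
    """Standard CG for A x = b with A symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for it in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            return x, it
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

rng = np.random.default_rng(0)
n = 500
for cond in [1e1, 1e3, 1e5]:
    # Build an SPD matrix with a prescribed condition number.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    eigs = np.logspace(0, np.log10(cond), n)
    A = Q @ np.diag(eigs) @ Q.T
    b = rng.standard_normal(n)
    _, iters = conjugate_gradient(A, b)
    print(f"condition number ~{cond:.0e}: CG iterations = {iters}")
```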
Author:
Patil, Pratik, LeJeune, Daniel
We employ random matrix theory to establish consistency of generalized cross validation (GCV) for estimating prediction risks of sketched ridge regression ensembles, enabling efficient and consistent tuning of regularization and sketching parameters.
External link:
http://arxiv.org/abs/2310.04357
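The following is a minimal sketch of generalized cross-validation for plain (unsketched, single-model) ridge regression, as a simplified stand-in for the sketched ridge ensembles analyzed in the paper above; the penalty convention, data, and lambda grid are illustrative assumptions.

```python
# A minimal sketch of generalized cross-validation (GCV) for plain ridge
# regression. Simplified illustration; not the sketched-ensemble estimator.
import numpy as np

def gcv_ridge(X, y, lam):
    """GCV criterion: (||y - H y||^2 / n) / (1 - trace(H)/n)^2,
    where H = X (X^T X + n*lam*I)^{-1} X^T is the ridge smoother matrix."""
    n, d = X.shape
    G = np.linalg.inv(X.T @ X + n * lam * np.eye(d))
    H = X @ G @ X.T
    resid = y - H @ y
    dof = np.trace(H)
    return (resid @ resid / n) / (1.0 - dof / n) ** 2

rng = np.random.default_rng(1)
n, d = 300, 100
X = rng.standard_normal((n, d))
beta = rng.standard_normal(d) / np.sqrt(d)
y = X @ beta + rng.standard_normal(n)

lams = np.logspace(-3, 1, 9)
scores = [gcv_ridge(X, y, lam) for lam in lams]
best = lams[int(np.argmin(scores))]
print(f"GCV-selected lambda: {best:.3g}")
```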
Author:
LeJeune, Daniel, Alemohammad, Sina
In order to better understand feature learning in neural networks, we propose a framework for understanding linear models in tangent feature space where the features are allowed to be transformed during training. We consider linear transformations of…
External link:
http://arxiv.org/abs/2308.15478
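The sketch below is a generic illustration of a linear model in tangent feature space: it linearizes a tiny one-hidden-layer network around its initialization and fits a ridge model on the resulting Jacobian features. It does not implement the transformed-feature framework of the paper; the architecture and data are assumptions.

```python
# A minimal sketch of a linear model on tangent (Jacobian) features of a small
# network at initialization. Illustration only; not the paper's framework.
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 200, 10, 64

# Random initialization of the network f(x) = v^T tanh(W x).
W = rng.standard_normal((h, d)) / np.sqrt(d)
v = rng.standard_normal(h) / np.sqrt(h)

def tangent_features(X):
    """Gradient of f(x) with respect to (v, W), flattened per example."""
    A = np.tanh(X @ W.T)                                      # (n, h) activations
    grad_v = A                                                # df/dv
    grad_W = (v * (1 - A ** 2))[:, :, None] * X[:, None, :]   # (n, h, d) df/dW
    return np.concatenate([grad_v, grad_W.reshape(X.shape[0], -1)], axis=1)

# Synthetic regression data.
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

Phi = tangent_features(X)
lam = 1e-3
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
print("train MSE of tangent-feature linear model:",
      np.mean((Phi @ theta - y) ** 2))
```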
Author:
Alemohammad, Sina, Casco-Rodriguez, Josue, Luzi, Lorenzo, Humayun, Ahmed Imtiaz, Babaei, Hossein, LeJeune, Daniel, Siahkoohi, Ali, Baraniuk, Richard G.
Seismic advances in generative AI algorithms for imagery, text, and other data types have led to the temptation to use synthetic data to train next-generation models. Repeating this process creates an autophagous (self-consuming) loop whose properties…
External link:
http://arxiv.org/abs/2307.01850
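As a toy illustration of the self-consuming loop described above (not the paper's generative-model experiments), the sketch below repeatedly fits a one-dimensional Gaussian to data and then replaces the data with samples from the fitted model, printing how the estimated mean and standard deviation drift across generations.

```python
# A toy sketch of a self-consuming ("autophagous") training loop: each
# generation is trained only on samples from the previous generation's fitted
# model. Illustration of the loop's structure only.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
data = rng.standard_normal(n)        # generation 0: real data, N(0, 1)

for gen in range(10):
    mu, sigma = data.mean(), data.std()
    print(f"generation {gen}: mean={mu:+.3f}, std={sigma:.3f}")
    # Next generation trains only on samples drawn from the current fit.
    data = rng.normal(mu, sigma, size=n)
```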
Author:
Saragadam, Vishwanath, LeJeune, Daniel, Tan, Jasper, Balakrishnan, Guha, Veeraraghavan, Ashok, Baraniuk, Richard G.
Implicit neural representations (INRs) have recently advanced numerous vision-related areas. INR performance depends strongly on the choice of the nonlinear activation function employed in its multilayer perceptron (MLP) network. A wide range of nonlinearities…
External link:
http://arxiv.org/abs/2301.05187
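To illustrate why the choice of nonlinearity matters for a coordinate network (a toy stand-in, not the paper's INR architecture or training procedure), the sketch below fits a high-frequency 1-D signal with a single random-feature layer using either a ReLU or a sine activation and compares reconstruction error.

```python
# A toy sketch of activation choice in a coordinate network: one random hidden
# layer plus a least-squares readout, with ReLU vs. sine nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 256)[:, None]                         # 1-D coordinates
signal = np.sin(20 * x[:, 0]) + 0.5 * np.sin(53 * x[:, 0])   # high-frequency target

def fit_random_feature_inr(activation, width=64, scale=30.0, lam=1e-6):
    """One random hidden layer + ridge least-squares readout."""
    W = rng.standard_normal((1, width)) * scale
    b = rng.uniform(-scale, scale, size=width)
    Phi = activation(x @ W + b)
    w_out = np.linalg.solve(Phi.T @ Phi + lam * np.eye(width), Phi.T @ signal)
    return Phi @ w_out

for name, act in [("ReLU", lambda z: np.maximum(z, 0)), ("sine", np.sin)]:
    recon = fit_random_feature_inr(act)
    print(f"{name:>4} activation: MSE = {np.mean((recon - signal) ** 2):.2e}")
```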
We take a random matrix theory approach to random sketching and show an asymptotic first-order equivalence of the regularized sketched pseudoinverse of a positive semidefinite matrix to a certain evaluation of the resolvent of the same matrix. We focus…
External link:
http://arxiv.org/abs/2211.03751
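The following is a rough numerical illustration of the equivalence idea in the entry above: a regularized sketched pseudoinverse is compared to a resolvent of the original matrix at a larger effective regularization level. The effective level is found here by simple trace matching rather than by the paper's exact characterization, so this is qualitative only.

```python
# A rough numerical sketch: a regularized sketched pseudoinverse behaves like a
# resolvent of the original matrix at an inflated regularization level. The
# effective level mu is obtained by trace matching (an illustration, not the
# paper's fixed-point formula).
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
n, m, lam = 400, 200, 0.1

# Positive semidefinite matrix A and a Gaussian sketch S in R^{n x m}.
G = rng.standard_normal((n, n))
A = G @ G.T / n
S = rng.standard_normal((n, m)) / np.sqrt(n)

# Regularized sketched pseudoinverse: S (S^T A S + lam I)^{-1} S^T.
Q_sketch = S @ np.linalg.solve(S.T @ A @ S + lam * np.eye(m), S.T)

# Choose mu so that the resolvent (A + mu I)^{-1} has the same trace, then
# compare the two objects through a random quadratic form.
eigs = np.linalg.eigvalsh(A)
target = np.trace(Q_sketch)
mu = brentq(lambda t: np.sum(1.0 / (eigs + t)) - target, 1e-6, 1e3)
Q_resolvent = np.linalg.inv(A + mu * np.eye(n))

v = rng.standard_normal(n) / np.sqrt(n)
print(f"effective regularization mu ~ {mu:.3f} (lam = {lam})")
print(f"v^T Q_sketch v    = {v @ Q_sketch @ v:.4f}")
print(f"v^T Q_resolvent v = {v @ Q_resolvent @ v:.4f}")
```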
Author:
Luzi, Lorenzo, LeJeune, Daniel, Siahkoohi, Ali, Alemohammad, Sina, Saragadam, Vishwanath, Babaei, Hossein, Liu, Naiming, Wang, Zichao, Baraniuk, Richard G.
We study the interpolation capabilities of implicit neural representations (INRs) of images. In principle, INRs promise a number of advantages, such as continuous derivatives and arbitrary sampling, being freed from the restrictions of a raster grid.
External link:
http://arxiv.org/abs/2211.00219
Machine learning systems are often applied to data that is drawn from a different distribution than the training distribution. Recent work has shown that for a variety of classification and signal reconstruction problems, the out-of-distribution performance…
External link:
http://arxiv.org/abs/2210.11589
Is overparameterization a privacy liability? In this work, we study the effect that the number of parameters has on a classifier's vulnerability to membership inference attacks. We first demonstrate how the number of parameters of a model can induce…
External link:
http://arxiv.org/abs/2205.14055
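As a minimal illustration of the membership inference threat model (not the paper's experimental setup), the sketch below fits an overparameterized least-squares classifier and mounts a simple loss-threshold attack: examples with unusually small loss are guessed to be training members.

```python
# A minimal sketch of a loss-threshold membership inference attack on an
# overparameterized (interpolating) model. Illustration only; the model, data,
# and threshold rule are not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 300          # fewer samples than features: an overparameterized fit

# Synthetic binary labels; half the data is used for training ("members").
X = rng.standard_normal((2 * n, d))
w_true = rng.standard_normal(d) / np.sqrt(d)
y = (X @ w_true + 0.5 * rng.standard_normal(2 * n) > 0).astype(float)
X_train, y_train = X[:n], y[:n]          # members
X_out, y_out = X[n:], y[n:]              # non-members

# Minimum-norm least-squares fit (interpolates the training labels).
w = np.linalg.pinv(X_train) @ y_train

def per_example_loss(Xs, ys):
    return (Xs @ w - ys) ** 2

# Attack: declare "member" when the per-example loss is below a threshold.
losses = np.concatenate([per_example_loss(X_train, y_train),
                         per_example_loss(X_out, y_out)])
is_member = np.concatenate([np.ones(n), np.zeros(n)])
guess = (losses < np.median(losses)).astype(float)
print("attack accuracy:", np.mean(guess == is_member))
```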