Showing 1 - 10 of 68 for search: '"Maly, Johannes"'
In this work, we analyze the relation between reparametrizations of gradient flow and the induced implicit bias in linear models, which encompass various basic regression tasks. In particular, we aim at understanding the influence of the model parameters …
External link:
http://arxiv.org/abs/2308.04921
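For intuition, here is the standard computation for one concrete reparametrization (a special case chosen for illustration; the paper's setting may be more general). With loss L(w) = ½‖Aw − y‖² and w = u ⊙ u − v ⊙ v, gradient flow on (u, v) gives

```latex
% Gradient flow on (u, v) for L(w) = \tfrac12 \|Aw - y\|_2^2, w = u \odot u - v \odot v:
\dot{u} = -2\, u \odot \nabla L(w), \qquad
\dot{v} = +2\, v \odot \nabla L(w),
\qquad\text{hence}\qquad
\dot{w} = 2u \odot \dot{u} - 2v \odot \dot{v}
        = -4\,(u \odot u + v \odot v) \odot \nabla L(w).
```

The parametrization thus acts as a coordinate-wise, time-varying preconditioner; with small initialization u = v = α·1 it keeps inactive coordinates small, one mechanism behind a sparsity-promoting implicit bias.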
Author:
Dirksen, Sjoerd, Maly, Johannes
We consider covariance estimation of any subgaussian distribution from finitely many i.i.d. samples that are quantized to one bit of information per entry. Recent work has shown that a reliable estimator can be constructed if uniformly distributed dithers …
External link:
http://arxiv.org/abs/2307.12613
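As a rough illustration of the dithering idea the abstract alludes to, the sketch below implements a symmetrized one-bit estimator with two independent uniform dithers per sample (our own toy construction: Gaussian data, and the dither level lam and all sizes are made-up values):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 5, 100_000
# Hypothetical ground truth: a well-conditioned covariance, Gaussian samples.
L = rng.standard_normal((p, p)) / np.sqrt(p)
Sigma = L @ L.T + 0.1 * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

lam = 6.0  # dithering level; should dominate the magnitude of the entries
tau1 = rng.uniform(-lam, lam, size=X.shape)  # two independent uniform dithers
tau2 = rng.uniform(-lam, lam, size=X.shape)
q1 = np.sign(X + tau1)  # one bit per entry, first dithered copy
q2 = np.sign(X + tau2)  # one bit per entry, second dithered copy

# Since E[sign(x + tau)] = x / lam for |x| <= lam and tau ~ U[-lam, lam],
# the symmetrized cross-correlation, rescaled by lam^2, estimates Sigma.
S = (lam**2 / (2 * n)) * (q1.T @ q2 + q2.T @ q1)
print("operator-norm error:", np.linalg.norm(S - Sigma, 2))
```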
Author:
Kümmerle, Christian, Maly, Johannes
We propose a new algorithm for the problem of recovering data that adheres to multiple, heterogeneous low-dimensional structures from linear observations. Focusing on data matrices that are simultaneously row-sparse and low-rank, we propose and analyze …
External link:
http://arxiv.org/abs/2306.04961
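The paper's own algorithm is not reproduced here; as a plainly-named substitute, the following sketch runs a simple iterative hard-thresholding heuristic (keep the s largest rows, then truncate the SVD) on a made-up Gaussian sensing model, just to show what "simultaneously row-sparse and low-rank" means operationally:

```python
import numpy as np

def project_heuristic(Z, s, r):
    """Keep the s rows of Z with largest Euclidean norm, then truncate the
    SVD to rank r. Composing the two projections is only a heuristic."""
    keep = np.argsort(np.linalg.norm(Z, axis=1))[-s:]
    Zs = np.zeros_like(Z)
    Zs[keep] = Z[keep]
    U, sv, Vt = np.linalg.svd(Zs, full_matrices=False)
    return (U[:, :r] * sv[:r]) @ Vt[:r]

rng = np.random.default_rng(2)
n1, n2, s, r, m = 40, 30, 5, 2, 400
X = np.zeros((n1, n2))
X[:s] = rng.standard_normal((s, r)) @ rng.standard_normal((r, n2))
A = rng.standard_normal((m, n1 * n2)) / np.sqrt(m)  # Gaussian measurements
y = A @ X.ravel()

Z = np.zeros((n1, n2))
for _ in range(200):
    grad = (A.T @ (A @ Z.ravel() - y)).reshape(n1, n2)
    Z = project_heuristic(Z - grad, s, r)
print("relative error:", np.linalg.norm(Z - X) / np.linalg.norm(X))
```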
As the array dimension of massive MIMO systems increases to unprecedented levels, two problems occur. First, the spatial stationarity assumption along the antenna elements is no longer valid. Second, the large array size results in an unacceptably high …
External link:
http://arxiv.org/abs/2301.04641
Author:
Maly, Johannes, Saab, Rayan
In this short note, we propose a new method for quantizing the weights of a fully trained neural network. A simple deterministic pre-processing step allows us to quantize network layers via memoryless scalar quantization while preserving the network …
External link:
http://arxiv.org/abs/2209.03487
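A minimal sketch of the memoryless scalar quantization step itself, i.e. independent rounding of each weight (the pre-processing the note proposes is not reproduced here; the bit width and layer sizes are made up):

```python
import numpy as np

def msq(W, bits=4):
    """Memoryless scalar quantization: round each weight independently to
    the nearest multiple of a step size set by the weight range."""
    delta = 2 * np.max(np.abs(W)) / (2**bits - 1)
    return delta * np.round(W / delta)

rng = np.random.default_rng(3)
W = rng.standard_normal((64, 128)) / np.sqrt(128)  # a toy layer's weights
Wq = msq(W, bits=4)
x = rng.standard_normal(128)
print("relative output error:",
      np.linalg.norm(W @ x - Wq @ x) / np.linalg.norm(W @ x))
```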
In many applications, solutions of numerical problems are required to be non-negative, e.g., when retrieving pixel intensity values or physical densities of a substance. In this context, non-negative least squares (NNLS) is a ubiquitous tool, e.g., …
External link:
http://arxiv.org/abs/2207.08437
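For reference, a minimal NNLS example using SciPy's off-the-shelf solver on made-up data:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
m, n = 100, 30
A = rng.standard_normal((m, n))
x_true = np.maximum(rng.standard_normal(n), 0)  # non-negative ground truth
b = A @ x_true + 0.01 * rng.standard_normal(m)

x_hat, rnorm = nnls(A, b)  # solves min ||Ax - b||_2 subject to x >= 0
print("smallest entry (never negative):", x_hat.min())
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```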
Published in:
Information and Inference: A Journal of the IMA, 12(3), April 2023, iaad012
In deep learning it is common to overparameterize neural networks, that is, to use more parameters than training samples. Quite surprisingly, training the neural network via (stochastic) gradient descent leads to models that generalize very well, while …
External link:
http://arxiv.org/abs/2112.11027
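A toy experiment along these lines (our own choice of parametrization w = u ⊙ u − v ⊙ v and made-up hyperparameters, solving an underdetermined least-squares problem whose ground truth is sparse):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 50
A = rng.standard_normal((n, d)) / np.sqrt(n)
w_true = np.zeros(d)
w_true[:3] = [1.0, -2.0, 0.5]  # sparse ground truth
y = A @ w_true

alpha, lr, steps = 1e-3, 0.02, 50_000  # hypothetical hyperparameters
u = alpha * np.ones(d)
v = alpha * np.ones(d)
for _ in range(steps):
    g = A.T @ (A @ (u * u - v * v) - y)              # gradient w.r.t. w
    u, v = u - lr * 2 * u * g, v + lr * 2 * v * g    # chain rule through w(u, v)
w = u * u - v * v
print("residual:", np.linalg.norm(A @ w - y))
print("five largest |w_i|:", np.sort(np.abs(w))[-5:])
```

With small initialization the recovered w is near-sparse even though nothing in the loss penalizes large supports.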
In this self-contained chapter, we revisit a fundamental problem of multivariate statistics: estimating covariance matrices from finitely many independent samples. Based on massive Multiple-Input Multiple-Output (MIMO) systems we illustrate the necessity …
External link:
http://arxiv.org/abs/2106.06190
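As a baseline for this problem, the operator-norm error of the plain sample covariance decays roughly like sqrt(p/n) for subgaussian data, which the following toy sketch (Gaussian samples, known zero mean; all sizes hypothetical) makes visible:

```python
import numpy as np

rng = np.random.default_rng(5)
p = 20
Sigma = np.eye(p)  # hypothetical ground truth
for n in (50, 500, 5000):
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    S = X.T @ X / n  # plain sample covariance (mean is known to be zero)
    print(n, np.round(np.linalg.norm(S - Sigma, 2), 3))
```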
We consider the classical problem of estimating the covariance matrix of a subgaussian distribution from i.i.d. samples in the novel context of coarse quantization, i.e., instead of having full knowledge of the samples, they are quantized to one or two …
External link:
http://arxiv.org/abs/2104.01280
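Undithered one-bit quantization destroys the scale, but for Gaussian data the correlation matrix remains recoverable by inverting the arcsine law; the sketch below assumes Gaussian samples and made-up sizes (the general subgaussian case, where dithering becomes necessary, is what the paper treats):

```python
import numpy as np

rng = np.random.default_rng(6)
p, n = 5, 200_000
L = rng.standard_normal((p, p)) / np.sqrt(p)
Sigma = L @ L.T + 0.1 * np.eye(p)
d = np.sqrt(np.diag(Sigma))
Corr = Sigma / np.outer(d, d)  # target: the correlation matrix

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S_sign = np.sign(X).T @ np.sign(X) / n  # one undithered bit per entry
# Gaussian arcsine law: E[sign(X_i) sign(X_j)] = (2/pi) * arcsin(Corr_ij).
C_hat = np.sin(np.pi / 2 * S_sign)
print("operator-norm error:", np.linalg.norm(C_hat - Corr, 2))
```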
Author:
Maly, Johannes
We consider the problem of recovering an unknown low-rank matrix X with (possibly) non-orthogonal, effectively sparse rank-1 decomposition from measurements y gathered in a linear measurement process A. We propose a variational formulation that lends itself …
External link:
http://arxiv.org/abs/2103.05523
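The variational formulation itself is not reproduced here; as a plainly-named substitute, a thresholded power iteration (a classical sparse-PCA heuristic) on a made-up noisy rank-1 observation illustrates the kind of sparse rank-1 structure being sought:

```python
import numpy as np

def soft(z, t):
    """Entrywise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(7)
p = 50
u = np.zeros(p)
u[:4] = rng.standard_normal(4)  # sparse rank-1 factor
X = np.outer(u, u) + 0.05 * rng.standard_normal((p, p))
X = (X + X.T) / 2  # noisy symmetric observation of u u^T

v = rng.standard_normal(p)
for _ in range(10):  # plain power iterations to approach the top eigenvector
    v = X @ v
    v /= np.linalg.norm(v)
for _ in range(50):  # then interleave soft-thresholding to promote sparsity
    v = soft(X @ v, 0.1)
    v /= np.linalg.norm(v)
print("recovered support:", np.flatnonzero(np.abs(v) > 1e-6))
```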