Showing 1 - 10
of 395
for search query: '"Nguyen, Thanh V."'
We study the implicit regularization of gradient descent towards structured sparsity via a novel neural reparameterization, which we call a diagonally grouped linear neural network. We show the following intriguing property of our reparameterization: …
External link:
http://arxiv.org/abs/2301.12540
In this paper, we study the implicit bias of gradient descent for sparse regression. We extend results on regression with quadratic parametrization, which amounts to depth-2 diagonal linear networks, to more general depth-N networks, under more realistic …
External link:
http://arxiv.org/abs/2108.05574
Deep generative models have emerged as a powerful class of priors for signals in various inverse problems such as compressed sensing, phase retrieval and super-resolution. Here, we assume an unknown signal to lie in the range of some pre-trained generative model …
External link:
http://arxiv.org/abs/2102.12643
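The setup sketched in the abstract above — recovering a signal assumed to lie in the range of a generator from few linear measurements — can be illustrated minimally by running gradient descent over the latent code. Here the "pre-trained generator" is replaced by a fixed random linear decoder purely for demonstration; G, A, and all dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, m = 100, 5, 25                          # ambient dim, latent dim, measurements
G = rng.standard_normal((d, k))               # stand-in "generator" (linear, demo only)
A = rng.standard_normal((m, d)) / np.sqrt(m)  # random measurement matrix

z_true = rng.standard_normal(k)
y = A @ (G @ z_true)                          # compressed observations, m << d

# Minimize f(z) = 0.5 * ||A G(z) - y||^2 by gradient descent over the latent z.
M = A @ G
lr = 1.0 / np.linalg.norm(M, 2) ** 2          # 1/L step size for the linear case
z = np.zeros(k)
for _ in range(5000):
    z -= lr * M.T @ (M @ z - y)
x_hat = G @ z                                 # reconstruction from only m measurements
```

Because the latent dimension k is far below m, the composite map A·G is well conditioned and the signal is recovered exactly here; with a nonlinear pre-trained generator the same latent-space descent is only guaranteed to find a local minimum.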
Published in:
npj Computational Materials (2020) 6:164
Surrogate models for partial-differential equations are widely used in the design of meta-materials to rapidly evaluate the behavior of composable components. However, the training cost of accurate surrogates by machine learning can rapidly increase …
External link:
http://arxiv.org/abs/2008.12649
Showing items that do not match search query intent degrades customer experience in e-commerce. These mismatches result from counterfactual biases of the ranking algorithms toward noisy behavioral signals such as clicks and purchases in the search logs …
External link:
http://arxiv.org/abs/2005.03624
Published in:
International Journal of General Medicine, Volume 16, pp. 1695-1703 (2023)
Van Thang Nguyen,1,2,* Hong Duc Pham,2,3,* Van Phan Nguyen Thanh,4 Thanh Dung Le5,6 1Radiology Department, Hai Duong Medical Technical University, Hai Duong, Vietnam; 2Radiology Department, Hanoi Medical University, Hanoi, Vietnam; 3Radiology …
External link:
https://doaj.org/article/c54a3db85e814e0a9fcf8ddb5f5d7681
A remarkable recent discovery in machine learning has been that deep neural networks can achieve impressive performance (in terms of both lower training error and higher generalization capacity) in the regime where they are massively over-parameterized …
External link:
http://arxiv.org/abs/1911.11983
We provide a series of results for unsupervised learning with autoencoders. Specifically, we study shallow two-layer autoencoder architectures with shared weights. We focus on three generative models for data that are common in statistical machine learning …
External link:
http://arxiv.org/abs/1806.00572
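A shallow two-layer autoencoder with shared (tied) weights, the architecture studied in the entry above, can be sketched in a few lines: the decoder reuses the encoder's weight matrix transposed, so there is a single trainable matrix. This toy trains on random data with hand-derived gradients; the sizes, step size, and ReLU activation are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
d, h, n = 20, 8, 200
X = rng.standard_normal((n, d))          # toy data

W = 0.1 * rng.standard_normal((h, d))    # one matrix shared by encoder (W)
                                         # and decoder (W^T) — "tied weights"
relu = lambda a: np.maximum(a, 0.0)

def loss(W):
    H = relu(X @ W.T)                    # encode
    return 0.5 * np.mean(np.sum((H @ W - X) ** 2, axis=1))  # decode with same W

lr = 0.01
loss0 = loss(W)
for _ in range(500):
    H = relu(X @ W.T)
    R = (H @ W - X) / n                  # scaled reconstruction residual
    g_dec = H.T @ R                      # gradient through the decoder path
    g_enc = ((R @ W.T) * (H > 0)).T @ X  # gradient through the encoder path
    W -= lr * (g_dec + g_enc)            # both paths update the shared W
```

Because the weights are tied, each step accumulates the gradient from both the encoder and decoder paths into the single matrix W; the reconstruction loss drops steadily from its initial value.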
Most existing algorithms for dictionary learning assume that all entries of the (high-dimensional) input data are fully observed. However, in several practical applications (such as hyper-spectral imaging or blood glucose monitoring), only an incomplete …
External link:
http://arxiv.org/abs/1804.09217
We introduce a new, systematic framework for visualizing information flow in deep networks. Specifically, given any trained deep convolutional network model and a given test image, our method produces a compact support in the image domain that corresponds …
External link:
http://arxiv.org/abs/1711.06221