Showing 1 - 10 of 66 for search: '"Luca, Zanni"'
Published in:
Applied Mathematics in Science and Engineering, Vol 31, Iss 1 (2023)
Finite-sum problems appear as the sample average approximation of a stochastic optimization problem and often arise in machine learning applications with large-scale data sets. A very popular approach to tackling finite-sum problems is the stochastic gradient … (a minimal sketch follows this record's links).
External link:
https://doaj.org/article/a842c4d1210c4b04a3d5361b910b62aa
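The record above refers to stochastic gradient methods for finite-sum objectives. As a generic illustration only, not the specific method proposed in the paper, a minimal stochastic gradient loop on a synthetic finite-sum least-squares problem might look as follows; the data and names are invented for the sketch.

import numpy as np

# Toy finite-sum problem: f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2,
# i.e. the sample average approximation of a least-squares objective.
rng = np.random.default_rng(0)
n, d = 1000, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def grad_i(x, i):
    # Gradient of the single component f_i at x.
    return (A[i] @ x - b[i]) * A[i]

x = np.zeros(d)
for k in range(20000):
    i = rng.integers(n)              # sample one term of the finite sum
    alpha = 1.0 / (1.0 + 1e-3 * k)   # diminishing steplength
    x -= alpha * grad_i(x, i)

print("full objective:", 0.5 * np.mean((A @ x - b) ** 2))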
Published in:
Mathematics in Engineering, Vol 5, Iss 1, Pp 1-21 (2023)
In the context of deep learning, the most expensive computational phase is the full training of the learning methodology. Indeed, its effectiveness depends on the choice of proper values for the so-called hyperparameters, namely the parameters that are … (a toy illustration of hyperparameter selection follows this record's links).
External link:
https://doaj.org/article/df7905a04667459db82a2f114e1ce4ed
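The record above points out that training effectiveness hinges on hyperparameter values. As a toy illustration only, not the approach studied in the paper, a naive grid search over a single hyperparameter (the steplength of gradient descent), selected by validation loss, could be sketched as:

import numpy as np

# Grid search over the steplength of gradient descent on a toy
# least-squares task; the value reaching the lowest loss on held-out
# validation data is kept.
rng = np.random.default_rng(1)
n, d = 400, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
A_tr, b_tr, A_val, b_val = A[:300], b[:300], A[300:], b[300:]

def train(alpha, iters=200):
    x = np.zeros(d)
    for _ in range(iters):
        x -= alpha * A_tr.T @ (A_tr @ x - b_tr) / len(b_tr)
    return x

best_loss, best_alpha = min(
    (0.5 * np.mean((A_val @ train(a) - b_val) ** 2), a)
    for a in [0.001, 0.01, 0.1, 0.5]
)
print("best steplength:", best_alpha, "validation loss:", best_loss)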
Published in:
Procedia Manufacturing, 2019, 38:488-496
Published in:
Communications in Computer and Information Science ISBN: 9783031340192
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::8623c623a7ddc91366093de506bf52f7
https://doi.org/10.1007/978-3-031-34020-8_2
Published in:
Applied Mathematics and Computation. 356:312-327
The role of steplength selection strategies in gradient methods has been widely investigated over the last decades. Starting from the work of Barzilai and Borwein (1988), many efficient steplength rules have been designed that have contributed to making …
In order to solve constrained optimization problems on convex sets, the class of scaled gradient projection methods is often exploited in combination with non-monotone Armijo-like line search strategies. These techniques are adopted for efficiently … (a minimal sketch of such an iteration follows this record's links).
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::48ecd946a18c94fafba5969eb3fdb0ee
https://hdl.handle.net/11591/482069
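The two snippets above mention Barzilai-Borwein-type steplengths and non-monotone Armijo-like line searches within (scaled) gradient projection schemes. A minimal, unscaled sketch of one such iteration on a box-constrained quadratic is given below; it is an illustration under simplifying assumptions, not the specific rules analysed in these papers.

import numpy as np
from collections import deque

# Gradient projection with a BB1 steplength and a non-monotone
# Armijo-like backtracking line search on
#   min 0.5 x^T Q x - c^T x   s.t.   0 <= x <= 1.
rng = np.random.default_rng(2)
d = 30
B = rng.standard_normal((d, d))
Q = B @ B.T + np.eye(d)                  # symmetric positive definite
c = rng.standard_normal(d)

f = lambda x: 0.5 * x @ Q @ x - c @ x
grad = lambda x: Q @ x - c
proj = lambda x: np.clip(x, 0.0, 1.0)    # projection onto the box

x = proj(rng.standard_normal(d))
alpha, beta, theta = 1.0, 1e-4, 0.5
recent_f = deque([f(x)], maxlen=10)      # memory for the non-monotone test

for _ in range(200):
    g = grad(x)
    dvec = proj(x - alpha * g) - x       # feasible descent direction
    if np.linalg.norm(dvec) < 1e-8:
        break
    lam, f_ref = 1.0, max(recent_f)      # reference value: max of recent f's
    while f(x + lam * dvec) > f_ref + beta * lam * (g @ dvec):
        lam *= theta                     # backtrack until the Armijo-like test holds
    x_new = x + lam * dvec
    s, y = x_new - x, grad(x_new) - g
    alpha = float(np.clip(s @ s / (s @ y), 1e-5, 1e5))   # BB1 steplength, safeguarded
    x = x_new
    recent_f.append(f(x))

print("objective:", f(x))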
Gradient projection methods represent effective tools for solving large-scale constrained optimization problems thanks to their simple implementation and low computational cost per iteration. Despite these good properties, a slow convergence rate can … (the basic iteration is recalled after this record's links).
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::4c024ae093414e571d242c2dc00dfd82
https://hdl.handle.net/11591/481628
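For reference, the basic (unscaled, fixed-steplength) gradient projection iteration on a closed convex feasible set that the record above starts from can be written as

\[
  x_{k+1} = P_{\Omega}\bigl(x_k - \alpha_k \nabla f(x_k)\bigr),
  \qquad
  P_{\Omega}(z) = \arg\min_{u \in \Omega} \|u - z\|_2 ,
\]

where the projection P_{\Omega} is cheap for simple sets such as boxes, which is what keeps the per-iteration cost low.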
In 1988, Barzilai and Borwein published a pioneering paper which opened the way to inexpensively accelerating first-order methods. In more detail, in the framework of unconstrained optimization, Barzilai and Borwein developed two strategies to select the steplength … (the two rules are recalled after this record's links).
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d08d4fef161c5f2f0e0c93f72904ceb8
http://hdl.handle.net/11591/463149
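The two classical steplengths proposed by Barzilai and Borwein, with s_{k-1} = x_k - x_{k-1} and y_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1}), are

\[
  \alpha_k^{\mathrm{BB1}} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}},
  \qquad
  \alpha_k^{\mathrm{BB2}} = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}}.
\]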
Published in:
Lecture Notes in Computer Science ISBN: 9783030390808
NUMTA(1)
Gradient Projection (GP) methods are a very popular tool for addressing box-constrained quadratic problems thanks to their simple implementation and low computational cost per iteration with respect, for example, to Newton approaches. It is, however, possible … (a minimal sketch follows this record's links).
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::2c63f4654589b5e84658615183b7e4b4
http://hdl.handle.net/11392/2413818
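As a generic companion to the record above (not the accelerated variants it studies), a bare projected-gradient loop for a box-constrained quadratic program, with one matrix-vector product per iteration as the dominant cost, might be sketched as:

import numpy as np

# min 0.5 x^T A x - b^T x   s.t.   lb <= x <= ub,
# solved by projected gradient with the safe steplength 1/L.
rng = np.random.default_rng(3)
n = 50
B = rng.standard_normal((n, n))
A = B @ B.T + np.eye(n)                 # symmetric positive definite Hessian
b = rng.standard_normal(n)
lb, ub = -np.ones(n), np.ones(n)

L = np.linalg.eigvalsh(A)[-1]           # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    x = np.clip(x - (A @ x - b) / L, lb, ub)   # one matvec plus a cheap projection

print("objective:", 0.5 * x @ A @ x - b @ x)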
Published in:
Rendiconti di Matematica e delle Sue Applicazioni, Vol 23, Iss 2, Pp 257-275 (2003)
We consider the numerical solution of the large convex quadratic program arising in training the learning machines known as support vector machines. Since the matrix of the quadratic form is dense and generally large, solution approaches based on explicit … (the dual formulation is recalled after this record's links).
External link:
https://doaj.org/article/0f64536b6a1d476198d1579759efb513
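The convex quadratic program mentioned above is the standard dual formulation of SVM training: with kernel K, labels y_i \in \{-1, +1\} and penalty parameter C, it reads

\[
  \min_{\alpha \in \mathbb{R}^n} \; \tfrac{1}{2}\, \alpha^{\top} Q \alpha - \sum_{i=1}^{n} \alpha_i
  \quad \text{s.t.} \quad \sum_{i=1}^{n} y_i \alpha_i = 0, \;\; 0 \le \alpha_i \le C,
  \qquad Q_{ij} = y_i y_j K(x_i, x_j).
\]

Since Q is dense for general kernels, methods that avoid forming or factorizing it explicitly are preferred, which is the difficulty the record above refers to.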