Showing 1 - 10 of 10,692 for search: '"Massei, A."'
Published in:
The Bryologist, 2018 Jan 01. 121(3), 253-263.
External link:
https://www.jstor.org/stable/26774978
Author:
Chun, Kwok P., Octavianti, Thanti, Papacharalampous, Georgia, Tyralis, Hristos, Sutanto, Samuel J., Terskii, Pavel, Mazzoglio, Paola, Treppiedi, Dario, Rivera, Juan, Dogulu, Nilay, Olusola, Adeyemi, Dieppois, Bastien, Dembélé, Moctar, Moulds, Simon, Li, Cheng, Morales-Marin, Luis Alejandro, Macdonald, Neil, Amoussou, Toundji Olivier, Yonaba, Roland, Obahoundje, Salomon, Massei, Nicolas, Hannah, David M., Chidepudi, Sivarama Krishna Reddy, Hamududu, Byman
We have witnessed and experienced increasingly frequent compound extreme events resulting from the simultaneous or sequential occurrence of multiple events in a changing climate. In addition to a growing demand for a clearer explanation of compound risks from a hydrological…
External link:
http://arxiv.org/abs/2409.19003
Author:
Massei, Stefano, Saluzzi, Luca
Solving large-scale continuous-time algebraic Riccati equations is a significant challenge in various control theory applications. This work demonstrates that when the matrix coefficients of the equation are quasiseparable, the solution also exhibits…
External link:
http://arxiv.org/abs/2408.16569
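For context on the record above: a minimal Python sketch solving a small, dense continuous-time algebraic Riccati equation with SciPy's solve_continuous_are. This is not the quasiseparable method the paper describes; the sizes and coefficients below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Small dense continuous-time algebraic Riccati equation:
#   A^T X + X A - X B R^{-1} B^T X + Q = 0
# (illustrative only; the paper targets large quasiseparable coefficients)
n = 4
rng = np.random.default_rng(0)
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stable drift term
B = rng.standard_normal((n, 1))
Q = np.eye(n)   # state cost (symmetric positive semidefinite)
R = np.eye(1)   # control cost (symmetric positive definite)

X = solve_continuous_are(A, B, Q, R)

# Verify that the Riccati residual is small
residual = A.T @ X + X @ A - X @ B @ np.linalg.inv(R) @ B.T @ X + Q
print(np.linalg.norm(residual))
```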
In this paper, we propose Chebyshev polynomials of the first kind that produce an optimal bound for a polynomial-dependent constant involved in the AMG $V$-cycle error bound and do not require information about the spectrum of the matrices. We formulate…
External link:
http://arxiv.org/abs/2407.09848
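To make the object in the record above concrete: a short sketch evaluating Chebyshev polynomials of the first kind via their three-term recurrence and checking them against the closed form cos(k arccos x) on [-1, 1]. The optimal-bound construction for the AMG V-cycle constant is the paper's contribution and is not reproduced here.

```python
import numpy as np

def chebyshev_T(k, x):
    """Evaluate the degree-k Chebyshev polynomial of the first kind
    via the three-term recurrence T_{j+1}(x) = 2x T_j(x) - T_{j-1}(x)."""
    t_prev, t_curr = np.ones_like(x), x
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr

x = np.linspace(-1.0, 1.0, 5)
for k in range(4):
    # On [-1, 1], T_k(x) = cos(k * arccos(x)); check the recurrence against it
    assert np.allclose(chebyshev_T(k, x), np.cos(k * np.arccos(x)))
```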
Published in:
Il Foro Italiano, 2012 Nov 01. 135(11), 639/640-645/646.
External link:
https://www.jstor.org/stable/26639188
Author:
Xia, Lu, Massei, Stefano
Adaptive first-order optimizers are fundamental tools in deep learning, although they may suffer from poor generalization due to nonuniform gradient scaling. In this work, we propose AdamL, a novel variant of the Adam optimizer that takes into account…
External link:
http://arxiv.org/abs/2312.15295
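For reference alongside the record above: a sketch of the standard Adam update (Kingma & Ba), whose per-coordinate rescaling is the nonuniform gradient scaling the abstract refers to. AdamL's modification is the paper's contribution and is not implemented here; the toy problem and hyperparameters are assumptions.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One step of standard Adam: each coordinate is rescaled by a running
    estimate of its own gradient magnitude (the nonuniform scaling)."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad**2    # second-moment estimate
    m_hat = m / (1 - beta1**t)               # bias corrections
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = ||theta||^2 / 2 as a toy check; grad f = theta
theta = np.array([1.0, -2.0])
m = v = np.zeros_like(theta)
for t in range(1, 501):
    theta, m, v = adam_step(theta, theta, m, v, t, lr=1e-2)
print(theta)  # approaches the minimizer at the origin
```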
Published in:
Il Foro Italiano, 1996 Dec 01. 119(12), 737/738-739/740.
External link:
https://www.jstor.org/stable/23191079
Academic article
This result cannot be displayed to unauthenticated users.
To view the result, please log in.
Author:
Massei, Stefano, Tudisco, Francesco
We consider the problem of attaining either the maximal increase or reduction of the robustness of a complex network by means of a bounded modification of a subset of the edge weights. We propose two novel strategies combining Krylov subspace approximation…
External link:
http://arxiv.org/abs/2303.04971
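To illustrate the kind of robustness objective the record above targets: a sketch computing the natural connectivity ln(tr(e^A)/n), a common spectral robustness measure, together with the effect of a small bounded edge-weight modification. The measure, graph, and budget are assumptions; the paper's Krylov subspace strategies are not reproduced.

```python
import numpy as np
from scipy.linalg import expm

def natural_connectivity(A):
    """Natural connectivity ln(tr(e^A)/n), a spectral robustness measure
    of the kind such edge-modification strategies aim to increase or reduce."""
    return np.log(np.trace(expm(A)) / A.shape[0])

# Random symmetric adjacency matrix with zero diagonal (illustrative graph)
rng = np.random.default_rng(1)
n = 20
mask = np.triu(rng.random((n, n)) < 0.2, k=1).astype(float)
A = mask + mask.T

base = natural_connectivity(A)

# Increase one existing edge weight within a small budget, measure the effect
i, j = np.argwhere(mask)[0]
A[i, j] += 0.5
A[j, i] += 0.5
print(base, natural_connectivity(A) - base)
```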
When training neural networks with low-precision computation, rounding errors often cause stagnation or are detrimental to the convergence of the optimizer; in this paper we study the influence of rounding errors on the convergence of gradient descent…
External link:
http://arxiv.org/abs/2301.09511
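As a toy illustration of the phenomenon in the record above: gradient descent on f(x) = x^2/2 with all arithmetic rounded to a chosen working precision. In float16 the update eventually underflows and the iterate stagnates away from the minimizer, while float64 keeps contracting. The objective, step size, and precisions are assumptions, not the paper's setting.

```python
import numpy as np

def gd_quadratic(dtype, steps=200):
    """Gradient descent on f(x) = x^2 / 2 with every operation rounded to
    the working precision `dtype`, mimicking low-precision training."""
    lr = dtype(0.1)
    x = dtype(1.0)
    for _ in range(steps):
        grad = x                  # f'(x) = x
        x = dtype(x - lr * grad)  # rounding error enters here
    return x

# In float16 the update lr * grad underflows to zero once x is small enough,
# so the iterate stagnates at a nonzero value; float64 continues toward 0
print(gd_quadratic(np.float16), gd_quadratic(np.float64))
```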