Showing 1 - 10 of 10
for search: '"Jary Pomponi"'
Published in:
IEEE Access, Vol 11, Pp 11298-11306 (2023)
Recent research has found that neural networks for computer vision are vulnerable to several types of external attacks that modify the input of the model, with the malicious intent of producing a misclassification. With the increase in the number of…
External link:
https://doaj.org/article/11da8a8819b7449eb7a2c8d1ea06e79e
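The record above only names the attack setting; as a minimal illustration of how an input can be modified to induce a misclassification, here is a one-step white-box sketch in the classic FGSM style. This is a generic textbook example, not the method of the paper; `model`, `x`, and `y` are assumed to be a PyTorch classifier, an image batch with pixels in [0, 1], and integer labels.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """One-step white-box attack: nudge each pixel along the sign of the
    loss gradient so the model is pushed toward a misclassification."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move in the direction that increases the loss, then clamp to valid pixels.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```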
Published in:
Entropy, Vol 24, Iss 1, p 1 (2022)
In this paper, we propose a new approach to train a deep neural network with multiple intermediate auxiliary classifiers, branching from it. These ‘multi-exits’ models can be used to reduce the inference time by performing early exit on the intermediate…
External link:
https://doaj.org/article/0bfeb80bee04472fa56ec10de2f528ab
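As a rough sketch of the multi-exit idea described above (auxiliary classifiers branching from intermediate layers, with early exit once a prediction is confident enough), assuming PyTorch and illustrative module names rather than the paper's actual architecture:

```python
import torch
import torch.nn as nn

class MultiExitNet(nn.Module):
    """Backbone blocks with one auxiliary classifier ('exit') per block."""
    def __init__(self, blocks, exits):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)  # feature extractors
        self.exits = nn.ModuleList(exits)    # one classifier head per block

    def forward(self, x, threshold=0.9):
        logits = None
        for block, head in zip(self.blocks, self.exits):
            x = block(x)
            logits = head(x)
            conf = logits.softmax(dim=-1).max(dim=-1).values
            # Early exit: stop as soon as the whole batch is confident enough
            # (a per-sample variant would route each input individually).
            if bool((conf >= threshold).all()):
                return logits
        return logits  # last exit if no intermediate branch was confident
```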
Published in:
Neural Networks. 164:606-616
Catastrophic forgetting (CF) happens whenever a neural network overwrites past knowledge while being trained on new tasks. Common techniques to handle CF include regularization of the weights (using, e.g., their importance on past tasks), and rehearsal…
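The weight-importance regularization mentioned in this abstract can be sketched as a quadratic penalty in the EWC style, added to the loss of the new task. This is a generic illustration, not the paper's specific method; `old_params` and `importance` are assumed to be dicts keyed by parameter name, computed after the previous task.

```python
import torch

def importance_penalty(model, old_params, importance, lam=1.0):
    """EWC-style regularizer: penalize moving each weight away from its
    value after the previous task, scaled by its estimated importance."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return lam * penalty
```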
Published in:
Neural Networks. 144:407-418
In this paper, we propose a novel ensembling technique for deep neural networks, which is able to drastically reduce the required memory compared to alternative approaches. In particular, we propose to extract multiple sub-networks from a single, untrained…
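One way to picture extracting several sub-networks from a single network is a shared weight tensor viewed through per-member binary masks. The sketch below is a hedged illustration of that general idea (random masks and a made-up `MaskedLinear` layer), not the selection procedure of the paper; memory then scales with the binary masks rather than with full per-member weight copies.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """One weight tensor shared by several ensemble members, each seeing
    it through its own fixed binary mask."""
    def __init__(self, in_f, out_f, n_members, keep=0.5):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_f, in_f))
        # One random binary mask per member (illustrative selection rule).
        self.register_buffer(
            "masks", (torch.rand(n_members, out_f, in_f) < keep).float()
        )

    def forward(self, x, member):
        return x @ (self.weight * self.masks[member]).t()
```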
Recent research has found that neural networks are vulnerable to several types of adversarial attacks, where the input samples are modified in such a way that the model produces a wrong prediction that misclassifies the adversarial sample. In this paper…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::ca11dd0e076d1ef1a0a2a1b0c7c55283
http://arxiv.org/abs/2202.02236
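Since the linked work concerns attacks that modify input samples, a gradient-free (black-box) variant can be sketched as a simple random search that keeps any perturbation lowering the true-class probability. This is again a generic illustration, not the paper's algorithm; `x` is assumed to be a single-image batch and `y` an integer label.

```python
import torch

@torch.no_grad()
def random_search_attack(model, x, y, eps=0.1, steps=200):
    """Gradient-free sketch: try small random perturbations and keep any
    candidate that lowers the model's probability for the true class."""
    best = x.clone()
    best_score = model(best).softmax(dim=-1)[0, y]
    for _ in range(steps):
        cand = (best + eps * torch.randn_like(x)).clamp(0.0, 1.0)
        score = model(cand).softmax(dim=-1)[0, y]
        if score < best_score:  # true class got less likely: keep it
            best, best_score = cand, score
    return best
```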
Published in:
Entropy, Vol 24, Iss 1, p 1 (2022)
In this paper, we propose a new approach to train a deep neural network with multiple intermediate auxiliary classifiers, branching from it. These ‘multi-exits’ models can be used to reduce the inference time by performing early exit on the intermediate…
Author:
Simone Scardapane, Martin Mundt, Tyler L. Hayes, Simone Calderara, Keiland W. Cooper, Christopher Kanan, Eden Belouadah, Lorenzo Pellegrini, Adrian Popescu, Matthias De Lange, Fabio Cuzzolin, Jeremy Forest, Jary Pomponi, Subutai Ahmad, Qi She, Luca Antiga, Gido M. van de Ven, Davide Maltoni, Davide Bacciu, Vincenzo Lomonaco, Joost van de Weijer, Marc Masana, Antonio Carta, Gabriele Graffieti, Andreas S. Tolias, German Ignacio Parisi, Andrea Cossu, Tinne Tuytelaars
Published in:
CVPR Workshops
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning. Recently, we have witnessed a renewed and fast-growing interest in continual learning, especially within the deep learning community…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d293de3db110ff7791c5e28c4027ea01
http://hdl.handle.net/11573/1612489
Bayesian Neural Networks (BNNs) are trained to optimize an entire distribution over their weights instead of a single set, having significant advantages in terms of, e.g., interpretability, multi-task learning, and calibration. Because of the intractability…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::60ba3b648941a75a8a3e04348385d5a8
http://hdl.handle.net/11573/1503663
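The abstract describes BNNs as optimizing a distribution over weights, with the exact posterior being intractable. A common workaround, shown here purely as an illustrative sketch, is a mean-field Gaussian layer trained with the reparameterization trick; a full variational treatment would also add a KL term between this weight distribution and a prior to the training loss.

```python
import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    """Mean-field Gaussian layer: learn a mean and log-std per weight and
    sample a fresh weight tensor on every forward pass."""
    def __init__(self, in_f, out_f):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(out_f, in_f))
        self.log_sigma = nn.Parameter(torch.full((out_f, in_f), -3.0))

    def forward(self, x):
        # Reparameterization trick: w = mu + sigma * eps, eps ~ N(0, I).
        w = self.mu + self.log_sigma.exp() * torch.randn_like(self.mu)
        return x @ w.t()
```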
Author:
Jary Pomponi, C. Fanelli
Imaging Cherenkov detectors are largely used for particle identification (PID) in nuclear and particle physics experiments, where developing fast reconstruction algorithms is becoming of paramount importance to allow for near real time calibration and…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::33624295f77c0807839876718ecc51ae
http://hdl.handle.net/11573/1409486
Continual learning of deep neural networks is a key requirement for scaling them up to more complex applicative scenarios and for achieving real lifelong learning of these architectures. Previous approaches to the problem have considered either the…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::4cb485b732b85cd0dbc5f82ff9c891bd
http://hdl.handle.net/11568/1127303