Showing 1 - 10 of 955 for query: '"Bonicelli A"'
Author:
Millunzi, Monica, Bonicelli, Lorenzo, Porrello, Angelo, Credi, Jacopo, Kolm, Petter N., Calderara, Simone
Forgetting presents a significant challenge during incremental training, making it particularly demanding for contemporary AI systems to assimilate new knowledge in streaming data environments. To address this issue, most approaches in Continual Learning…
External link:
http://arxiv.org/abs/2408.14284
Author:
Frascaroli, Emanuele, Panariello, Aniello, Buzzega, Pietro, Bonicelli, Lorenzo, Porrello, Angelo, Calderara, Simone
With the emergence of Transformers and Vision-Language Models (VLMs) such as CLIP, fine-tuning large pre-trained models has recently become a prevalent strategy in Continual Learning. This has led to the development of numerous prompting strategies…
External link:
http://arxiv.org/abs/2407.15793
Author:
Menabue, Martin, Frascaroli, Emanuele, Boschini, Matteo, Bonicelli, Lorenzo, Porrello, Angelo, Calderara, Simone
The field of Continual Learning (CL) has inspired numerous researchers over the years, leading to increasingly advanced countermeasures to the issue of catastrophic forgetting. Most studies have focused on the single-class scenario, where each example…
External link:
http://arxiv.org/abs/2407.14249
Author:
Porrello, Angelo, Bonicelli, Lorenzo, Buzzega, Pietro, Millunzi, Monica, Calderara, Simone, Cucchiara, Rita
The fine-tuning of deep pre-trained models has revealed compositional properties, with multiple specialized modules that can be arbitrarily composed into a single, multi-task model. However, identifying the conditions that promote compositionality…
External link:
http://arxiv.org/abs/2405.16350
Author:
Menabue, Martin, Frascaroli, Emanuele, Boschini, Matteo, Sangineto, Enver, Bonicelli, Lorenzo, Porrello, Angelo, Calderara, Simone
Prompt-tuning methods for Continual Learning (CL) freeze a large pre-trained model and train a few parameter vectors termed prompts. Most of these methods organize these vectors in a pool of key-value pairs and use the input image as query to retrieve…
External link:
http://arxiv.org/abs/2403.06870
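The abstract above describes the common key-value prompt-pool mechanism. A minimal sketch of such retrieval, assuming cosine similarity between a frozen encoder's image feature and learned keys (all dimensions, names, and the random initialization here are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: encoder feature dim, prompt length, pool size, prompts kept.
FEAT_DIM, PROMPT_LEN, POOL_SIZE, TOP_K = 8, 4, 10, 2

# Pool of key-value pairs: keys are matched against the input feature,
# values are the prompt vectors prepended to the frozen model's tokens.
keys = rng.normal(size=(POOL_SIZE, FEAT_DIM))
prompts = rng.normal(size=(POOL_SIZE, PROMPT_LEN, FEAT_DIM))

def retrieve_prompts(query_feature, keys, prompts, top_k=TOP_K):
    """Return the top-k prompts whose keys are most similar to the query."""
    q = query_feature / np.linalg.norm(query_feature)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    scores = k @ q                       # cosine similarity per key
    idx = np.argsort(scores)[-top_k:]    # indices of the best-matching keys
    return prompts[idx]                  # shape: (top_k, PROMPT_LEN, FEAT_DIM)

query = rng.normal(size=FEAT_DIM)        # stand-in for the image's encoder feature
selected = retrieve_prompts(query, keys, prompts)
print(selected.shape)  # (2, 4, 8)
```

At training time the selected keys are typically pulled toward the queries that chose them, so the pool gradually specializes per task.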
We investigate the massive Sine-Gordon model in the finite ultraviolet regime on the two-dimensional Minkowski spacetime $(\mathbb{R}^2,\eta)$ with an additive Gaussian white noise. In particular we construct the expectation value and the correlation…
External link:
http://arxiv.org/abs/2311.01558
Published in:
Math Phys Anal Geom 27, 16 (2024)
On a $d$-dimensional Riemannian, spin manifold $(M,g)$ we consider non-linear, stochastic partial differential equations for spinor fields, driven by a Dirac operator and coupled to an additive Gaussian, vector-valued white noise. We extend to the case…
External link:
http://arxiv.org/abs/2309.16376
Author:
Bonicelli, Lorenzo, Boschini, Matteo, Frascaroli, Emanuele, Porrello, Angelo, Pennisi, Matteo, Bellitto, Giovanni, Palazzo, Simone, Spampinato, Concetto, Calderara, Simone
Humans can learn incrementally, whereas neural networks forget previously acquired information catastrophically. Continual Learning (CL) approaches seek to bridge this gap by facilitating the transfer of knowledge to both previous tasks (backward transfer)…
External link:
http://arxiv.org/abs/2305.03648
In the realm of complex systems, dynamics is often modeled in terms of a non-linear, stochastic, ordinary differential equation (SDE) with either an additive or a multiplicative Gaussian white noise. In addition to a well-established collection of…
External link:
http://arxiv.org/abs/2302.10579
Author:
Bonicelli, Lorenzo, Boschini, Matteo, Porrello, Angelo, Spampinato, Concetto, Calderara, Simone
Rehearsal approaches enjoy immense popularity with Continual Learning (CL) practitioners. These methods collect samples from previously encountered data distributions in a small memory buffer; subsequently, they repeatedly optimize on the latter to…
External link:
http://arxiv.org/abs/2210.06443
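The small memory buffer described in the abstract above is commonly filled with reservoir sampling, so every example in the stream has an equal chance of being stored. A minimal sketch of such a buffer (a generic illustration of the rehearsal idea, not this paper's specific method):

```python
import random

class RehearsalBuffer:
    """Fixed-size memory filled by reservoir sampling over a data stream."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        """Store the example with probability capacity / n_seen."""
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, batch_size):
        # Replay batch drawn uniformly from memory; during training it is
        # mixed with the current task's batch to counter forgetting.
        return self.rng.sample(self.data, min(batch_size, len(self.data)))

buf = RehearsalBuffer(capacity=50)
for x in range(1000):        # simulate a stream of 1000 examples
    buf.add(x)
print(len(buf.data))         # 50
```

Repeatedly optimizing on these stored samples is what exposes rehearsal methods to overfitting the buffer, the failure mode such papers typically address.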