Showing 1 - 10 of 4,635 for search: '"Twardowski, A"'
Author:
Goswami, Dipam, Magistri, Simone, Wang, Kai, Twardowski, Bartłomiej, Bagdanov, Andrew D., van de Weijer, Joost
Using pre-trained models has been found to reduce the effect of data heterogeneity and speed up federated learning algorithms. Recent works have investigated the use of first-order statistics and second-order statistics to aggregate local client data…
External link:
http://arxiv.org/abs/2412.14326
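As an illustration of the kind of aggregation described above, here is a minimal NumPy sketch (not the authors' algorithm; all names are made up): each client reports per-class counts, means (first-order) and covariances (second-order), and the server merges them using the law of total covariance.

import numpy as np

def client_statistics(features, labels, num_classes):
    # Per-class count, mean and covariance of the client's local features.
    stats = {}
    for c in range(num_classes):
        x = features[labels == c]
        if len(x):
            stats[c] = (len(x), x.mean(axis=0),
                        np.cov(x, rowvar=False, bias=True))
    return stats

def aggregate_on_server(per_client_stats):
    # Merge client statistics into global per-class means/covariances,
    # weighted by sample counts (law of total covariance).
    merged = {}
    for c in {c for s in per_client_stats for c in s}:
        parts = [s[c] for s in per_client_stats if c in s]
        n = sum(p[0] for p in parts)
        mu = sum(p[0] * p[1] for p in parts) / n
        cov = sum(p[0] * (p[2] + np.outer(p[1] - mu, p[1] - mu))
                  for p in parts) / n
        merged[c] = (n, mu, cov)
    return merged

Only these summary statistics leave each client, which is part of what makes such schemes attractive under data heterogeneity.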
Author:
Gomez-Villa, Alex, Wang, Kai, Parraga, Alejandro C., Twardowski, Bartlomiej, Malo, Jesus, Vazquez-Corral, Javier, van de Weijer, Joost
Visual illusions in humans arise when interpreting out-of-distribution stimuli: if the observer is adapted to certain statistics, perception of outliers deviates from reality. Recent studies have shown that artificial neural networks (ANNs) can also…
External link:
http://arxiv.org/abs/2412.10122
We propose an End-to-end Convolutional Activation Anomaly Analysis (E2E-CA³), which is a significant extension of the A³ anomaly detection approach proposed by Sperl, Schulze and Böttinger, both in terms of architecture and scope of application.
External link:
http://arxiv.org/abs/2411.14509
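For context, the A³ approach of Sperl, Schulze and Böttinger feeds the hidden activations of a frozen "target" network into a trainable "alarm" network that scores anomalies. The PyTorch sketch below illustrates that idea under simplifying assumptions (a small fully connected target; all sizes and names are illustrative) and is not the E2E-CA³ architecture itself.

import torch
import torch.nn as nn

class ActivationAnomalyDetector(nn.Module):
    # Alarm network scoring the hidden activations of a frozen target
    # network, in the spirit of A^3 (all dimensions here are illustrative).
    def __init__(self, target: nn.Sequential, act_dim: int):
        super().__init__()
        self.target = target.eval()
        for p in self.target.parameters():
            p.requires_grad_(False)            # target stays fixed
        self.alarm = nn.Sequential(            # trained to flag anomalies
            nn.Linear(act_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        acts, h = [], x
        for layer in self.target:              # collect every activation
            h = layer(h)
            acts.append(h.flatten(1))
        return self.alarm(torch.cat(acts, dim=1))   # anomaly logit

# e.g. a 10-d input: activations total 32 + 32 + 16 = 80 dims
target = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 16))
detector = ActivationAnomalyDetector(target, act_dim=80)
scores = detector(torch.randn(4, 10))          # shape (4, 1)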
Exemplar-Free Class Incremental Learning (EFCIL) tackles the problem of training a model on a sequence of tasks without access to past data. Existing state-of-the-art methods represent classes as Gaussian distributions in the feature extractor's latent space…
External link:
http://arxiv.org/abs/2409.18265
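A minimal sketch of what "classes as Gaussians in the latent space" can look like, assuming features come from a frozen or slowly changing extractor (illustrative only, not the paper's method): each class keeps only a mean and covariance, and classification picks the nearest class under Mahalanobis distance.

import numpy as np

class GaussianClassMemory:
    # Each class is stored as a Gaussian (mean + covariance) in feature
    # space, so no exemplars need to be kept; a small ridge term keeps
    # the covariance invertible.
    def __init__(self, eps=1e-4):
        self.classes, self.eps = {}, eps

    def add_class(self, feats, label):
        mu = feats.mean(axis=0)
        cov = np.cov(feats, rowvar=False) + self.eps * np.eye(feats.shape[1])
        self.classes[label] = (mu, np.linalg.inv(cov))

    def predict(self, f):
        # nearest class under Mahalanobis distance
        return min(self.classes,
                   key=lambda c: (f - self.classes[c][0])
                                 @ self.classes[c][1]
                                 @ (f - self.classes[c][0]))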
Test-Time Adaptation (TTA) has recently emerged as a promising strategy for tackling the problem of machine learning model robustness under distribution shifts by adapting the model during inference without access to any labels. Because of task difficulty…
External link:
http://arxiv.org/abs/2407.14231
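A widely used baseline in this setting is entropy minimization (Tent, Wang et al., ICLR 2021): during inference the model is updated to make its own predictions more confident on the unlabeled test batch. A rough PyTorch sketch of that baseline, not necessarily the method examined in the paper:

import torch

def entropy_minimization_step(model, x, optimizer):
    # One Tent-style adaptation step on an unlabeled test batch; in
    # practice only normalization-layer affine parameters are updated.
    logits = model(x)
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()      # confident predictions = low entropy
    optimizer.step()
    return logits.detach()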
Author:
Gomez-Villa, Alex, Goswami, Dipam, Wang, Kai, Bagdanov, Andrew D., Twardowski, Bartlomiej, van de Weijer, Joost
Exemplar-free class-incremental learning using a backbone trained from scratch and starting from a small first task presents a significant challenge for continual representation learning. Prototype-based approaches, when continually updated, face the…
External link:
http://arxiv.org/abs/2407.08536
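For reference, the simplest prototype-based approach is a nearest-class-mean classifier whose prototypes are added task by task. The NumPy sketch below shows that baseline (illustrative only) and deliberately ignores the prototype-drift problem the paper addresses.

import numpy as np

class PrototypeClassifier:
    # Nearest-class-mean classifier for exemplar-free class-incremental
    # learning: each new task adds class means ("prototypes") computed
    # from the current feature extractor.
    def __init__(self):
        self.prototypes = {}

    def add_task(self, feats, labels):
        for c in np.unique(labels):
            self.prototypes[int(c)] = feats[labels == c].mean(axis=0)

    def predict(self, f):
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(f - self.prototypes[c]))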
This paper introduces a continual learning approach named MagMax, which utilizes model merging to enable large pre-trained models to continuously learn from new data without forgetting previously acquired knowledge. Distinct from traditional continual learning…
External link:
http://arxiv.org/abs/2407.06322
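As a rough reading of the model-merging idea: after fine-tuning the pre-trained model on each task, the per-task weight deltas ("task vectors") can be merged by keeping, for every parameter entry, the delta with the largest magnitude. A hedged PyTorch sketch over state dicts; the paper's actual procedure certainly differs in details.

import torch

def max_magnitude_merge(pretrained, finetuned_list):
    # Merge fine-tuned checkpoints into one model by selecting, per
    # parameter entry, the task vector with the largest magnitude.
    # pretrained / finetuned_list: state dicts mapping name -> tensor.
    merged = {}
    for name, base in pretrained.items():
        deltas = torch.stack([ft[name] - base for ft in finetuned_list])
        idx = deltas.abs().argmax(dim=0, keepdim=True)   # winner per entry
        merged[name] = base + deltas.gather(0, idx).squeeze(0)
    return merged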
Author:
Goswami, Dipam, Soutif-Cormerais, Albin, Liu, Yuyang, Kamath, Sandesh, Twardowski, Bartłomiej, van de Weijer, Joost
Continual learning methods are known to suffer from catastrophic forgetting, a phenomenon that is particularly hard to counter for methods that do not store exemplars of previous tasks. Therefore, to reduce potential drift in the feature extractor, …
External link:
http://arxiv.org/abs/2405.19074
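One established exemplar-free way to counter such drift is semantic drift compensation (Yu et al., CVPR 2020): estimate, from current-task data, how features move between the old and the updated extractor, and shift stored class prototypes accordingly. A NumPy sketch of that idea (illustrative, not this paper's contribution):

import numpy as np

def compensate_drift(prototypes, feats_old, feats_new, sigma=1.0):
    # Shift stored class prototypes by the feature drift observed on
    # current-task data; Gaussian weights favor samples that lie close
    # to each prototype in the old feature space.
    drift = feats_new - feats_old
    updated = {}
    for c, p in prototypes.items():
        d2 = ((feats_old - p) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        updated[c] = p + (w[:, None] * drift).sum(axis=0) / (w.sum() + 1e-8)
    return updated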
Few-shot class-incremental learning (FSCIL) aims to adapt the model to new classes from very few data (5 samples) without forgetting the previously learned classes. Recent works in many-shot CIL (MSCIL, using all available training data) exploited …
External link:
http://arxiv.org/abs/2404.06622
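A common FSCIL baseline to which such works compare is to freeze the backbone and extend a cosine classifier with prototypes averaged over the few support samples per new class. A PyTorch sketch under those assumptions (names illustrative; this is not the calibration method of the paper):

import torch
import torch.nn.functional as F

def add_fewshot_classes(weights, encoder, support_x, support_y):
    # Extend a cosine classifier with prototypes averaged from the few
    # (e.g. 5) support samples per new class; old weights stay frozen.
    with torch.no_grad():
        feats = F.normalize(encoder(support_x), dim=1)
        protos = [feats[support_y == c].mean(0) for c in support_y.unique()]
        new = F.normalize(torch.stack(protos), dim=1)
    return torch.cat([weights, new], dim=0)   # (old + new classes, dim)

def cosine_logits(weights, encoder, x, tau=16.0):
    # Scaled cosine similarity between features and all class weights.
    feats = F.normalize(encoder(x), dim=1)
    return tau * feats @ weights.t()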
Author:
Szatkowski, Filip, Yang, Fei, Twardowski, Bartłomiej, Trzciński, Tomasz, van de Weijer, Joost
Continual learning is crucial for applications in dynamic environments, where machine learning models must adapt to changing data distributions while retaining knowledge of previous tasks. Despite significant advancements, catastrophic forgetting…
External link:
http://arxiv.org/abs/2403.07404