Showing 1 - 10 of 537 for search: '"VAN DE WEIJER, JOOST"'
Author:
Goswami, Dipam, Magistri, Simone, Wang, Kai, Twardowski, Bartłomiej, Bagdanov, Andrew D., van de Weijer, Joost
Using pre-trained models has been found to reduce the effect of data heterogeneity and speed up federated learning algorithms. Recent works have investigated the use of first-order and second-order statistics to aggregate local client data…
External link:
http://arxiv.org/abs/2412.14326
Author:
Gomez-Villa, Alex, Wang, Kai, Parraga, Alejandro C., Twardowski, Bartlomiej, Malo, Jesus, Vazquez-Corral, Javier, van de Weijer, Joost
Visual illusions in humans arise when interpreting out-of-distribution stimuli: if the observer is adapted to certain statistics, perception of outliers deviates from reality. Recent studies have shown that artificial neural networks (ANNs) can also…
External link:
http://arxiv.org/abs/2412.10122
Author:
Hu, Taihang, Li, Linxuan, van de Weijer, Joost, Gao, Hongcheng, Khan, Fahad Shahbaz, Yang, Jian, Cheng, Ming-Ming, Wang, Kai, Wang, Yaxing
Although text-to-image (T2I) models exhibit remarkable generation capabilities, they frequently fail to accurately bind semantically related objects or attributes in the input prompts, a challenge termed semantic binding. Previous approaches either…
External link:
http://arxiv.org/abs/2411.07132
With the advent of large pre-trained vision-language models such as CLIP, prompt learning methods aim to enhance the transferability of the CLIP model. They learn the prompt from a few samples of the downstream task, given the specific class names as…
External link:
http://arxiv.org/abs/2410.22317
Author:
Laria, Héctor, Gomez-Villa, Alex, Marouf, Imad Eddine, Wang, Kai, Raducanu, Bogdan, van de Weijer, Joost
Recent advances in diffusion models have significantly enhanced image generation capabilities. However, customizing these models with new classes often leads to unintended consequences that compromise their reliability. We introduce the concept of…
External link:
http://arxiv.org/abs/2410.14159
Deep neural networks (DNNs) excel on fixed datasets but struggle with incremental and shifting data in real-world scenarios. Continual learning addresses this challenge by allowing models to learn from new data while retaining previously learned knowledge…
External link:
http://arxiv.org/abs/2408.01076
Author:
Gomez-Villa, Alex, Goswami, Dipam, Wang, Kai, Bagdanov, Andrew D., Twardowski, Bartlomiej, van de Weijer, Joost
Exemplar-free class-incremental learning using a backbone trained from scratch and starting from a small first task presents a significant challenge for continual representation learning. Prototype-based approaches, when continually updated, face the…
External link:
http://arxiv.org/abs/2407.08536
Text-to-Image (T2I) generation has made significant advancements with the advent of diffusion models. These models exhibit remarkable abilities to produce images based on textual prompts. Current T2I models allow users to specify object colors using…
External link:
http://arxiv.org/abs/2407.07197
Recent research has identified a temporary performance drop on previously learned tasks when transitioning to a new one. This drop is called the stability gap and has great consequences for continual learning: it complicates the direct employment of…
External link:
http://arxiv.org/abs/2406.05114
Author:
Goswami, Dipam, Soutif--Cormerais, Albin, Liu, Yuyang, Kamath, Sandesh, Twardowski, Bartłomiej, van de Weijer, Joost
Continual learning methods are known to suffer from catastrophic forgetting, a phenomenon that is particularly hard to counter for methods that do not store exemplars of previous tasks. Therefore, to reduce potential drift in the feature extractor…
External link:
http://arxiv.org/abs/2405.19074