Showing 1 - 10 of 2,492 for search: '"de Weijer A"'
Author:
Laria, Héctor, Gomez-Villa, Alex, Marouf, Imad Eddine, Wang, Kai, Raducanu, Bogdan, van de Weijer, Joost
Recent advances in diffusion models have significantly enhanced image generation capabilities. However, customizing these models with new classes often leads to unintended consequences that compromise their reliability. We introduce the concept of op…
External link:
http://arxiv.org/abs/2410.14159
Deep neural networks (DNNs) excel on fixed datasets but struggle with incremental and shifting data in real-world scenarios. Continual learning addresses this challenge by allowing models to learn from new data while retaining previously learned knowledge…
External link:
http://arxiv.org/abs/2408.01076
Author:
Gomez-Villa, Alex, Goswami, Dipam, Wang, Kai, Bagdanov, Andrew D., Twardowski, Bartlomiej, van de Weijer, Joost
Exemplar-free class-incremental learning using a backbone trained from scratch and starting from a small first task presents a significant challenge for continual representation learning. Prototype-based approaches, when continually updated, face the…
External link:
http://arxiv.org/abs/2407.08536
Text-to-Image (T2I) generation has made significant advancements with the advent of diffusion models. These models exhibit remarkable abilities to produce images based on textual prompts. Current T2I models allow users to specify object colors using…
External link:
http://arxiv.org/abs/2407.07197
Recent research has identified a temporary performance drop on previously learned tasks when transitioning to a new one. This drop is called the stability gap and has serious consequences for continual learning: it complicates the direct employment of cont…
External link:
http://arxiv.org/abs/2406.05114
Author:
Goswami, Dipam, Soutif-Cormerais, Albin, Liu, Yuyang, Kamath, Sandesh, Twardowski, Bartłomiej, van de Weijer, Joost
Continual learning methods are known to suffer from catastrophic forgetting, a phenomenon that is particularly hard to counter for methods that do not store exemplars of previous tasks. Therefore, to reduce potential drift in the feature extractor, e…
External link:
http://arxiv.org/abs/2405.19074
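The drift problem mentioned in the abstract above can be illustrated with a minimal sketch (not the paper's method): exemplar-free methods often keep a mean-feature prototype per class and classify by nearest class mean, but when the feature extractor changes during later tasks, the stored prototypes no longer live in the current feature space. The linear "feature extractor" and all data below are hypothetical stand-ins.

```python
# Sketch: class prototypes in an exemplar-free setting, and how
# feature-extractor drift makes stored prototypes stale.
# The linear map standing in for a DNN backbone is an assumption
# for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def extract(x, w):
    """Toy feature extractor: a fixed linear map standing in for a DNN."""
    return x @ w

def prototypes(feats, labels):
    """Class prototype = mean feature vector per class."""
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def ncm_predict(feat, protos):
    """Nearest-class-mean: assign the class of the closest prototype."""
    return min(protos, key=lambda c: np.linalg.norm(feat - protos[c]))

# Task 1: two classes; prototypes are stored instead of raw exemplars.
x1 = rng.normal(0.0, 0.3, (20, 4)) + np.array([1.0, 0.0, 0.0, 0.0])
x2 = rng.normal(0.0, 0.3, (20, 4)) + np.array([0.0, 1.0, 0.0, 0.0])
x = np.vstack([x1, x2])
y = np.repeat([0, 1], 20)

w_old = np.eye(4)
protos = prototypes(extract(x, w_old), y)

# After training on a new task the extractor changes; the stored
# prototypes were computed under w_old and now mismatch the new space.
w_new = w_old + 0.5 * rng.normal(size=(4, 4))
query = np.array([1.0, 0.0, 0.0, 0.0])  # a point near the class-0 mean
print(ncm_predict(extract(query, w_old), protos))  # consistent space
print(ncm_predict(extract(query, w_new), protos))  # drifted space
```

Compensating for this drift (e.g., by estimating how prototypes move as the backbone updates) is the kind of problem the line of work above addresses.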
Broad, open-source availability of large pretrained foundation models on the internet through platforms such as HuggingFace has taken the world of practical deep learning by storm. A classical pipeline for neural network training now typically consists of…
External link:
http://arxiv.org/abs/2405.18069
Large-scale Text-to-Image (T2I) diffusion models demonstrate significant generation capabilities based on textual prompts. Building on T2I diffusion models, text-guided image editing research aims to empower users to manipulate generated images by a…
External link:
http://arxiv.org/abs/2405.01496
Few-shot class-incremental learning (FSCIL) aims to adapt the model to new classes from very few data (5 samples) without forgetting the previously learned classes. Recent works in many-shot CIL (MSCIL) (using all available training data) exploited p…
External link:
http://arxiv.org/abs/2404.06622
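The FSCIL setting described above can be sketched in a few lines (a generic illustration, not any specific paper's method): a nearest-class-mean classifier over features from a frozen backbone is extended with a new class using only its 5 support samples, leaving the old class prototypes untouched. The synthetic features below are hypothetical stand-ins for backbone outputs.

```python
# Sketch: extending a nearest-class-mean classifier with a new class
# from only 5 samples (the FSCIL setting). Features are synthetic
# stand-ins for the output of a frozen pre-trained backbone.
import numpy as np

rng = np.random.default_rng(1)

def add_class(protos, feats, label):
    """Register a new class as the mean of its few support features."""
    protos[label] = feats.mean(axis=0)
    return protos

def predict(feat, protos):
    """Nearest-class-mean classification over all registered classes."""
    return min(protos, key=lambda c: np.linalg.norm(feat - protos[c]))

# Base session: many-shot classes 0 and 1, prototypes already computed.
protos = {0: np.array([1.0, 0.0, 0.0]), 1: np.array([0.0, 1.0, 0.0])}

# Incremental session: class 2 arrives with only 5 support samples.
support = rng.normal(0.0, 0.2, (5, 3)) + np.array([0.0, 0.0, 1.0])
protos = add_class(protos, support, 2)

# Old classes remain recognizable (their prototypes are untouched),
# and the new class is classified from its 5-shot prototype.
print(predict(np.array([0.9, 0.1, 0.0]), protos))
print(predict(np.array([0.0, 0.1, 0.9]), protos))
```

With so few samples the new prototype is a noisy estimate of the class mean, which is why FSCIL methods go beyond this plain averaging, e.g., by calibrating the few-shot statistics against the many-shot base classes.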
Author:
Szatkowski, Filip, Yang, Fei, Twardowski, Bartłomiej, Trzciński, Tomasz, van de Weijer, Joost
Continual learning is crucial for applications in dynamic environments, where machine learning models must adapt to changing data distributions while retaining knowledge of previous tasks. Despite significant advancements, catastrophic forgetting - w…
External link:
http://arxiv.org/abs/2403.07404