Showing 1 - 10 of 1,152
for search: '"Raducanu, A."'
The growing demand for customized visual content has led to the rise of personalized text-to-image (T2I) diffusion models. Despite their remarkable potential, they pose significant privacy risks when misused for malicious purposes. In this paper, we …
External link:
http://arxiv.org/abs/2411.16437
With the advent of large pre-trained vision-language models such as CLIP, prompt learning methods aim to enhance the transferability of the CLIP model. They learn the prompt from a few samples of the downstream task, given the specific class names as …
External link:
http://arxiv.org/abs/2410.22317
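The entry above describes prompt learning only at a high level. As a rough illustration of the general idea behind CoOp-style methods (not necessarily this paper's approach), the sketch below trains a small set of context vectors that are prepended to frozen class-name embeddings; the dimensions, the mean-pooling stand-in for CLIP's frozen text encoder, and the dummy data are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptLearner(nn.Module):
    """Learnable context vectors shared across classes (CoOp-style sketch).

    Assumes a frozen text encoder maps token embeddings into a joint
    image-text space; all sizes here are illustrative.
    """
    def __init__(self, n_ctx: int = 4, embed_dim: int = 512, n_classes: int = 10):
        super().__init__()
        # Learnable "soft prompt": n_ctx context vectors, randomly initialized.
        self.ctx = nn.Parameter(torch.randn(n_ctx, embed_dim) * 0.02)
        # Stand-in for frozen class-name token embeddings (one per class).
        self.register_buffer("cls_emb", torch.randn(n_classes, 1, embed_dim))

    def forward(self) -> torch.Tensor:
        n_classes = self.cls_emb.shape[0]
        # Prepend the shared context to every class-name embedding:
        # result has shape (n_classes, n_ctx + 1, embed_dim).
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)
        return torch.cat([ctx, self.cls_emb], dim=1)

def clip_style_logits(image_feats, text_feats, temperature=0.01):
    """Cosine-similarity logits between image and per-class text features."""
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    return image_feats @ text_feats.t() / temperature

# Few-shot training step: only the context vectors receive gradients.
learner = PromptLearner()
optimizer = torch.optim.SGD([learner.ctx], lr=0.002)
image_feats = torch.randn(8, 512)    # batch of image features (dummy)
labels = torch.randint(0, 10, (8,))  # few-shot labels (dummy)
# A real implementation would run learner() through CLIP's frozen text
# encoder; mean-pooling the prompt tokens is a stand-in here.
text_feats = learner().mean(dim=1)   # (n_classes, embed_dim)
loss = F.cross_entropy(clip_style_logits(image_feats, text_feats), labels)
loss.backward()
optimizer.step()
```

In a real few-shot setup, only `ctx` is updated from the handful of labeled downstream samples, while the vision-language backbone stays frozen.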
Author:
Laria, Héctor, Gomez-Villa, Alex, Marouf, Imad Eddine, Wang, Kai, Raducanu, Bogdan, van de Weijer, Joost
Recent advances in diffusion models have significantly enhanced image generation capabilities. However, customizing these models with new classes often leads to unintended consequences that compromise their reliability. We introduce the concept of …
External link:
http://arxiv.org/abs/2410.14159
Recent research identified a temporary performance drop on previously learned tasks when transitioning to a new one. This drop is called the stability gap, and it has significant consequences for continual learning: it complicates the direct employment of …
External link:
http://arxiv.org/abs/2406.05114
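Since the stability gap is defined by how old-task performance evolves during new-task training, a minimal way to quantify it is to evaluate the old task at every iteration after the task switch and measure the depth of the transient dip. The helper below is an illustrative metric under that assumption, not the paper's exact definition.

```python
def stability_gap(old_task_acc: list[float]) -> float:
    """Depth of the transient dip in old-task accuracy after a task switch.

    `old_task_acc` holds the old task's accuracy evaluated at every training
    iteration of the *new* task, starting just before the switch; the gap is
    how far accuracy falls below its pre-switch value before recovering.
    (An illustrative metric, not the paper's exact definition.)
    """
    baseline = old_task_acc[0]
    return baseline - min(old_task_acc)

# Usage: accuracy dips from 0.90 to 0.60 before recovering to 0.85.
print(stability_gap([0.90, 0.75, 0.60, 0.70, 0.82, 0.85]))  # ~0.30
```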
Uncertainty-based deep learning models have attracted a great deal of interest for their ability to provide accurate and reliable predictions. Evidential deep learning stands out, achieving remarkable performance in detecting out-of-distribution (OOD) …
External link:
http://arxiv.org/abs/2309.02995
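For readers unfamiliar with evidential deep learning, the core mechanism can be sketched in a few lines: class logits are mapped to non-negative evidence, the evidence parameterizes a Dirichlet distribution, and the "vacuity" (total lack of evidence) serves as the OOD score. This is a generic sketch of the evidential formulation, not this particular paper's model.

```python
import torch
import torch.nn.functional as F

def dirichlet_uncertainty(logits: torch.Tensor):
    """Evidential-style uncertainty from classifier logits (illustrative).

    Treats non-negative evidence as Dirichlet parameters alpha = evidence + 1;
    the vacuity u = K / sum(alpha) is high when total evidence is low, which
    is the signal typically used to flag out-of-distribution inputs.
    """
    evidence = F.softplus(logits)               # non-negative evidence per class
    alpha = evidence + 1.0                      # Dirichlet concentration
    strength = alpha.sum(dim=-1, keepdim=True)  # total evidence S
    prob = alpha / strength                     # expected class probabilities
    k = logits.shape[-1]
    vacuity = k / strength.squeeze(-1)          # uncertainty in (0, 1]
    return prob, vacuity

# Usage: low-magnitude logits (little evidence) yield high vacuity.
in_dist = torch.tensor([[8.0, 0.1, 0.1]])
ood = torch.tensor([[0.1, 0.1, 0.1]])
for name, x in [("in-dist", in_dist), ("ood", ood)]:
    _, u = dirichlet_uncertainty(x)
    print(name, float(u))  # the OOD input gets a noticeably higher score
```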
Author:
Irimescu, Raluca Elena, Raducanu, Doina, Nocivin, Anna, Cojocaru, Elisabeta Mirela, Cojocaru, Vasile Danut, Zarnescu-Ivan, Nicoleta
Published in:
Materials (ISSN 1996-1944), Dec 2024, Vol. 17, Issue 23, p. 5828, 16 pp.
In this paper, we investigate the continual learning of Vision Transformers (ViT) in the challenging exemplar-free scenario, with a special focus on how to efficiently distill the knowledge of its crucial self-attention mechanism (SAM). Our work takes …
External link:
http://arxiv.org/abs/2203.13167
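Distilling the self-attention mechanism typically means keeping the student's attention maps close to those of the frozen previous-task model. The sketch below uses a simple mean-squared-error penalty between attention maps as an illustration; the tensor shapes and the trade-off weight are assumptions, and the paper's actual loss may differ.

```python
import torch
import torch.nn.functional as F

def attention_distillation_loss(student_attn: torch.Tensor,
                                teacher_attn: torch.Tensor) -> torch.Tensor:
    """MSE between student and frozen-teacher self-attention maps.

    Both tensors hold attention weights of shape
    (batch, heads, tokens, tokens); pulling the student's maps toward the
    previous-task model's maps is one way to preserve old-task behaviour
    without stored exemplars. (Illustrative, not the paper's exact loss.)
    """
    return F.mse_loss(student_attn, teacher_attn)

# Usage sketch with ViT-Base-like shapes (12 heads, 197 tokens -- assumed).
b, h, t = 2, 12, 197
student_logits = torch.randn(b, h, t, t, requires_grad=True)
student_attn = student_logits.softmax(dim=-1)
with torch.no_grad():
    teacher_attn = torch.randn(b, h, t, t).softmax(dim=-1)
lambda_distill = 1.0  # hypothetical trade-off against the new-task loss
loss = lambda_distill * attention_distillation_loss(student_attn, teacher_attn)
loss.backward()  # gradients flow only into the student's attention logits
```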
GANs have matured in recent years and are able to generate high-resolution, realistic images. However, the computational resources and the data required to train high-quality GANs are enormous, and the study of transfer learning of these models …
External link:
http://arxiv.org/abs/2112.02219
Active learning aims to reduce the labeling effort required to train algorithms by learning an acquisition function that selects, from a large unlabeled data pool, the most relevant data for which a label should be requested. Active learning is …
External link:
http://arxiv.org/abs/2110.04543
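The paper above learns its acquisition function, but the role an acquisition function plays is easy to show with a classical hand-crafted baseline: rank the unlabeled pool by predictive entropy and request labels for the most uncertain samples. A minimal sketch, with dummy pool probabilities:

```python
import torch

def entropy_acquisition(probs: torch.Tensor, budget: int) -> torch.Tensor:
    """Select the `budget` most uncertain samples by predictive entropy.

    `probs` is (n_unlabeled, n_classes) softmax output of the current model;
    returns indices into the unlabeled pool to send for labeling.
    (Entropy is one classical acquisition function; the paper above
    *learns* the acquisition function instead.)
    """
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy.topk(budget).indices

# Usage: pick 2 of 5 pool samples for annotation.
pool_probs = torch.tensor([
    [0.98, 0.01, 0.01],  # confident -> low entropy
    [0.34, 0.33, 0.33],  # uncertain -> high entropy
    [0.70, 0.20, 0.10],
    [0.50, 0.49, 0.01],
    [0.90, 0.05, 0.05],
])
print(entropy_acquisition(pool_probs, budget=2))  # tensor([1, 2]): highest entropy
```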
Active learning is a paradigm aimed at reducing the annotation effort by training the model on actively selected informative and/or representative samples. Another paradigm for reducing the annotation effort is self-training, which learns from a large …
External link:
http://arxiv.org/abs/2108.11458
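Self-training, the second paradigm mentioned above, can be reduced to a confidence-thresholded pseudo-labeling step: predictions on unlabeled data that exceed a threshold are treated as labels for further training. A generic sketch (the threshold and data are illustrative):

```python
import torch

def pseudo_label(probs: torch.Tensor, threshold: float = 0.9):
    """Self-training step: keep confident predictions as pseudo-labels.

    `probs` is (n_unlabeled, n_classes); samples whose max probability
    exceeds `threshold` are returned with their predicted class, ready to
    be mixed into the labeled set. (A generic sketch of self-training,
    complementary to the active-learning selection above.)
    """
    conf, pred = probs.max(dim=-1)
    keep = conf >= threshold
    return keep.nonzero(as_tuple=True)[0], pred[keep]

# Usage: only the confidently predicted samples get pseudo-labels.
probs = torch.tensor([[0.95, 0.05], [0.55, 0.45], [0.05, 0.95]])
idx, labels = pseudo_label(probs)
print(idx.tolist(), labels.tolist())  # [0, 2] [0, 1]
```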