Showing 1 - 10 of 29
for search: '"Bohdal, Ondrej"'
In this work, we present compelling evidence that controlling model capacity during fine-tuning can effectively mitigate memorization in diffusion models. Specifically, we demonstrate that adopting Parameter-Efficient Fine-Tuning (PEFT) within the pr…
External link:
http://arxiv.org/abs/2410.22149
Large-scale text-to-image diffusion models excel in generating high-quality images from textual inputs, yet concerns arise as research indicates their tendency to memorize and replicate training data, raising … We also addressed the issue of memorization…
External link:
http://arxiv.org/abs/2406.18566
Diffusion models excel in generating images that closely resemble their training data but are also susceptible to data memorization, raising privacy, ethical, and legal concerns, particularly in sensitive domains such as medical imaging. We hypothesize…
External link:
http://arxiv.org/abs/2405.19458
Large language models (LLMs) famously exhibit emergent in-context learning (ICL) -- the ability to rapidly adapt to new tasks using few-shot examples provided as a prompt, without updating the model's weights. Built on top of LLMs, vision large language…
External link:
http://arxiv.org/abs/2403.13164
Current vision large language models (VLLMs) exhibit remarkable capabilities yet are prone to generate harmful content and are vulnerable to even the simplest jailbreaking attacks. Our initial analysis finds that this is due to the presence of harmful…
External link:
http://arxiv.org/abs/2402.02207
Training models with robust group fairness properties is crucial in ethically sensitive application areas such as medical diagnosis. Despite the growing body of work aiming to minimise demographic bias in AI, this problem remains challenging. A key r…
External link:
http://arxiv.org/abs/2310.05055
Performance of a pre-trained semantic segmentation model is likely to substantially decrease on data from a new domain. We show a pre-trained model can be adapted to unlabelled target domain data by calculating soft-label prototypes under the domain…
External link:
http://arxiv.org/abs/2307.10842
Source-free domain adaptation has become popular because of its practical usefulness and no need to access source data. However, the adaptation process still takes a considerable amount of time and is predominantly based on optimization that relies on…
External link:
http://arxiv.org/abs/2307.10787
Enhancing the generalisation abilities of neural networks (NNs) through integrating noise such as MixUp or Dropout during training has emerged as a powerful and adaptable technique. Despite the proven efficacy of noise in NN training, there is no con…
External link:
http://arxiv.org/abs/2306.17630
Author:
Bohdal, Ondrej, Tian, Yinbing, Zong, Yongshuo, Chavhan, Ruchika, Li, Da, Gouk, Henry, Guo, Li, Hospedales, Timothy
Meta-learning and other approaches to few-shot learning are widely studied for image recognition, and are increasingly applied to other vision tasks such as pose estimation and dense prediction. This naturally raises the question of whether there is…
External link:
http://arxiv.org/abs/2305.07625