Showing 1 - 10
of 488
for search: '"Hospedales, Timothy"'
Author:
Tan, Fuwen, Lee, Royson, Dudziak, Łukasz, Hu, Shell Xu, Bhattacharya, Sourav, Hospedales, Timothy, Tzimiropoulos, Georgios, Martinez, Brais
Large language models (LLMs) have revolutionized language processing, delivering outstanding results across multiple applications. However, deploying LLMs on edge devices poses several challenges with respect to memory, energy, and compute costs, limiting …
External link:
http://arxiv.org/abs/2408.13933
The advancement of large language models (LLMs) has significantly broadened the scope of applications in natural language processing, with multi-modal LLMs extending these capabilities to integrate and interpret visual data. However, existing benchmarks …
External link:
http://arxiv.org/abs/2406.12742
Large-scale text-to-image diffusion models excel at generating high-quality images from textual inputs, yet concerns arise as research indicates their tendency to memorize and replicate training data, raising … We also addressed the issue of memorization …
External link:
http://arxiv.org/abs/2406.18566
Diffusion models show a remarkable ability to generate images that closely mirror the training distribution. However, these models are prone to training data memorization, leading to significant privacy, ethical, and legal concerns, particularly in …
External link:
http://arxiv.org/abs/2405.19458
While large-scale text-to-image diffusion models have demonstrated impressive image-generation capabilities, there are significant concerns about their potential misuse for generating unsafe content, violating copyright, and perpetuating societal biases …
External link:
http://arxiv.org/abs/2405.19237
Author:
Lee, Royson, Fernandez-Marques, Javier, Hu, Shell Xu, Li, Da, Laskaridis, Stefanos, Dudziak, Łukasz, Hospedales, Timothy, Huszár, Ferenc, Lane, Nicholas D.
Federated learning (FL) has enabled distributed learning of a model across multiple clients in a privacy-preserving manner. One of the main challenges of FL is to accommodate clients with varying hardware capacities; clients have differing compute and …
External link:
http://arxiv.org/abs/2405.14791
Large language models (LLMs) famously exhibit emergent in-context learning (ICL) -- the ability to rapidly adapt to new tasks using few-shot examples provided as a prompt, without updating the model's weights. Built on top of LLMs, vision large language models …
External link:
http://arxiv.org/abs/2403.13164
Author:
Bandyopadhyay, Hmrishav, Bhunia, Ayan Kumar, Chowdhury, Pinaki Nath, Sain, Aneeshan, Xiang, Tao, Hospedales, Timothy, Song, Yi-Zhe
We propose SketchINR to advance the representation of vector sketches with implicit neural models. A variable-length vector sketch is compressed into a latent space of fixed dimension that implicitly encodes the underlying shape as a function of time …
External link:
http://arxiv.org/abs/2403.09344
Current vision large language models (VLLMs) exhibit remarkable capabilities yet are prone to generating harmful content and are vulnerable to even the simplest jailbreaking attacks. Our initial analysis finds that this is due to the presence of harmful …
External link:
http://arxiv.org/abs/2402.02207
In recent years, self-supervised learning has excelled thanks to its capacity to learn robust feature representations from unlabelled data. Networks pretrained through self-supervision serve as effective feature extractors for downstream tasks, including F…
External link:
http://arxiv.org/abs/2402.01274