Showing 1 - 10 of 31 for search: '"James, Seale"'
Author:
Lin, Chi-Heng, Gao, Shangqian, Smith, James Seale, Patel, Abhishek, Tuli, Shikhar, Shen, Yilin, Jin, Hongxia, Hsu, Yen-Chang
Large Language Models (LLMs) have reshaped the landscape of artificial intelligence by demonstrating exceptional performance across various tasks. However, substantial computational requirements make their deployment challenging on devices with limited…
External link:
http://arxiv.org/abs/2408.09632
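The abstract above motivates compressing LLMs for deployment on constrained hardware. As a generic illustration of one compression route, here is a minimal global magnitude-pruning sketch in PyTorch; this is an assumed example for context, not the method of the paper above.

```python
import torch
import torch.nn as nn

def magnitude_prune_(model: nn.Module, sparsity: float = 0.5) -> None:
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    all_weights = torch.cat([p.detach().abs().flatten()
                             for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(all_weights, sparsity)
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:                       # prune weight matrices, skip biases
                p.mul_((p.abs() > threshold).float())

# toy usage: zero out half the weights of a small stand-in model
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
magnitude_prune_(model, sparsity=0.5)
```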
Author:
Smith, James Seale, Valkov, Lazar, Halbe, Shaunak, Gutta, Vyshnavi, Feris, Rogerio, Kira, Zsolt, Karlinsky, Leonid
Foundation Models (FMs) have become the hallmark of modern AI; however, these models are trained on massive data, leading to financially expensive training. Updating FMs as new data becomes available is important but can lead to `catastrophic forgetting`…
External link:
http://arxiv.org/abs/2404.12526
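Since the snippet above centers on catastrophic forgetting during model updates, a minimal sketch of how forgetting is typically observed may help: train on one task, naively fine-tune on a second, then re-check accuracy on the first. All data and shapes here are toy assumptions.

```python
import torch
import torch.nn as nn

def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, x, y, steps=200, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

model = nn.Linear(20, 4)
xa, ya = torch.randn(256, 20), torch.randint(0, 2, (256,))      # "task A": classes 0-1
xb, yb = torch.randn(256, 20) + 2, torch.randint(2, 4, (256,))  # "task B": classes 2-3

train(model, xa, ya)
acc_before = accuracy(model, xa, ya)
train(model, xb, yb)                      # naive update on the new task...
acc_after = accuracy(model, xa, ya)       # ...then re-evaluate the old one
print(f"task A accuracy: {acc_before:.2f} -> {acc_after:.2f}")  # the drop is forgetting
```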
Recent work has demonstrated a remarkable ability to customize text-to-image diffusion models to multiple, fine-grained concepts in a sequential (i.e., continual) manner while only providing a few example images for each concept. This setting is known…
External link:
http://arxiv.org/abs/2311.18763
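One common way to add concepts to a frozen model sequentially is to train a small low-rank adapter per concept. The sketch below is a generic LoRA-style linear layer, given as an assumed illustration of the setting, not the specific method of the paper above.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base layer plus a trainable low-rank update (A @ B)."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)               # the backbone never changes
        self.A = nn.Parameter(torch.zeros(base.out_features, rank))   # zero init
        self.B = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)

    def forward(self, x):
        return self.base(x) + x @ (self.A @ self.B).T

layer = LoRALinear(nn.Linear(64, 64), rank=4)
# train (layer.A, layer.B) on concept 1's few images, store the pair,
# then re-initialize a fresh (A, B) for concept 2, and so on.
```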
Robust fine-tuning aims to achieve competitive in-distribution (ID) performance while maintaining the out-of-distribution (OOD) robustness of a pre-trained model when transferring it to a downstream task. Recently, projected gradient descent has been…
External link:
http://arxiv.org/abs/2310.19182
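Since the abstract names projected gradient descent, here is a minimal sketch of the idea for robust fine-tuning: take a gradient step, then project the weights back into a ball around the pre-trained weights. The L2 ball and radius are assumptions for illustration; the paper's exact projection may differ.

```python
import torch

def project_(param, anchor, radius):
    """Project `param` into an L2 ball of `radius` centered at `anchor`, in place."""
    delta = param.data - anchor
    norm = delta.norm()
    if norm > radius:
        param.data.copy_(anchor + delta * (radius / norm))

def pgd_step(model, anchors, loss_fn, lr=1e-3, radius=1.0):
    loss = loss_fn(model)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for name, p in model.named_parameters():
            p -= lr * p.grad
            project_(p, anchors[name], radius)    # stay close to pre-trained weights

# `anchors` is a snapshot of the pre-trained weights taken once before fine-tuning:
# anchors = {n: p.detach().clone() for n, p in model.named_parameters()}
```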
In this paper, we focus on the important yet understudied problem of Continual Federated Learning (CFL), where a server communicates with a set of clients to incrementally learn new concepts over time without sharing or storing any data. The complexity…
External link:
http://arxiv.org/abs/2306.09970
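A minimal federated-averaging loop shows the communication pattern the abstract describes: clients train locally on private data and only model weights travel to the server. This is generic FedAvg, assumed here for illustration rather than the paper's method.

```python
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, steps=5, lr=1e-2):
    """Each client fine-tunes a private copy; raw data never leaves the client."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    x, y = data
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()
    return model.state_dict()

def fedavg_round(server_model, client_datasets):
    states = [local_update(server_model, d) for d in client_datasets]
    avg = {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}
    server_model.load_state_dict(avg)             # the server only ever sees weights
```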
Author:
Smith, James Seale, Hsu, Yen-Chang, Zhang, Lingyu, Hua, Ting, Kira, Zsolt, Shen, Yilin, Jin, Hongxia
Recent works demonstrate a remarkable ability to customize text-to-image diffusion models while only providing a few example images. What happens if you try to customize such models using multiple, fine-grained concepts in a sequential (i.e., continual)…
External link:
http://arxiv.org/abs/2304.06027
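Customizing a frozen generator from a few example images is often done by optimizing a single new token embedding (textual-inversion style). The sketch below is an assumed, simplified illustration of that idea, not necessarily the method of the paper above.

```python
import torch
import torch.nn as nn

vocab_size, dim = 1000, 64
embeddings = nn.Embedding(vocab_size, dim)
embeddings.weight.requires_grad_(False)           # the existing vocabulary is frozen

new_token = nn.Parameter(torch.randn(dim) * 0.01) # the only trainable tensor
opt = torch.optim.Adam([new_token], lr=5e-3)

def embed(prompt_ids, new_pos):
    """Embed a prompt, splicing the learnable token in at position `new_pos`."""
    e = embeddings(prompt_ids).clone()
    e[new_pos] = new_token
    return e

# a training loop would compute the diffusion loss on the few example images
# with `embed(...)` as the text conditioning, updating only `new_token`.
```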
Author:
Cascante-Bonilla, Paola, Shehada, Khaled, Smith, James Seale, Doveh, Sivan, Kim, Donghyun, Panda, Rameswar, Varol, Gül, Oliva, Aude, Ordonez, Vicente, Feris, Rogerio, Karlinsky, Leonid
Large-scale pre-trained Vision & Language (VL) models have shown remarkable performance in many applications, enabling the replacement of a fixed set of supported classes with zero-shot open vocabulary reasoning over (almost arbitrary) natural language prompts…
External link:
http://arxiv.org/abs/2303.17590
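The open-vocabulary behavior the abstract refers to can be pictured with a small CLIP-style scoring sketch: class names become text embeddings, and the class nearest to the image embedding wins. `text_encoder` here is a hypothetical stand-in for any VL model's text tower.

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image_emb, class_names, text_encoder):
    """Score an image embedding against arbitrary class names via cosine similarity."""
    text_embs = torch.stack([text_encoder(f"a photo of a {c}") for c in class_names])
    image_emb = F.normalize(image_emb, dim=-1)
    text_embs = F.normalize(text_embs, dim=-1)
    probs = (image_emb @ text_embs.T * 100.0).softmax(dim=-1)
    return class_names[probs.argmax().item()], probs

# swapping in a new `class_names` list changes the label set with no retraining,
# which is what makes the vocabulary "open".
```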
Author:
Smith, James Seale, Karlinsky, Leonid, Gutta, Vyshnavi, Cascante-Bonilla, Paola, Kim, Donghyun, Arbelle, Assaf, Panda, Rameswar, Feris, Rogerio, Kira, Zsolt
Computer vision models suffer from a phenomenon known as catastrophic forgetting when learning novel concepts from continuously shifting training data. Typical solutions for this continual learning problem require extensive rehearsal of previously seen…
External link:
http://arxiv.org/abs/2211.13218
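The rehearsal approach the abstract mentions is usually a replay buffer mixed into each training batch. A minimal reservoir-sampling buffer is sketched below for context; it illustrates that baseline, not the paper's own approach.

```python
import random

class ReplayBuffer:
    """Keep a bounded uniform sample of past examples via reservoir sampling."""
    def __init__(self, capacity=1000):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            i = random.randrange(self.seen)       # replace uniformly at random
            if i < self.capacity:
                self.data[i] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

# during continual training, each batch mixes new data with buffer.sample(k)
# so that old concepts keep appearing in the gradient signal.
```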
Generalized Zero-Shot Learning (GZSL) aims to train a classifier that can generalize to unseen classes, using a set of attributes as auxiliary information and the visual features extracted from a pre-trained convolutional neural network. While recent…
External link:
http://arxiv.org/abs/2211.12494
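The GZSL recipe in the abstract (attributes as auxiliary information plus pre-trained visual features) is often realized as a compatibility model: map image features into attribute space and score every class by its attribute vector. The dimensions below are toy assumptions.

```python
import torch
import torch.nn as nn

num_attrs, feat_dim = 85, 2048                    # e.g. CNN features -> attribute space
W = nn.Linear(feat_dim, num_attrs, bias=False)    # learned on seen classes only

def score_classes(visual_feat, class_attrs):
    """class_attrs: (num_classes, num_attrs); rows may include unseen classes."""
    pred_attrs = W(visual_feat)                   # project the image into attribute space
    return pred_attrs @ class_attrs.T             # compatibility score per class

feats = torch.randn(feat_dim)
attrs = torch.rand(50, num_attrs)                 # say, 40 seen + 10 unseen classes
print(score_classes(feats, attrs).argmax().item())
```

Because scoring only needs a class's attribute vector, classes never seen in training can still be ranked, which is the point of the zero-shot setup.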
Author:
Smith, James Seale, Cascante-Bonilla, Paola, Arbelle, Assaf, Kim, Donghyun, Panda, Rameswar, Cox, David, Yang, Diyi, Kira, Zsolt, Feris, Rogerio, Karlinsky, Leonid
Recently, large-scale pre-trained Vision-and-Language (VL) foundation models have demonstrated remarkable capabilities in many zero-shot downstream tasks, achieving competitive results for recognizing objects defined by as little as a short text prompt…
External link:
http://arxiv.org/abs/2211.09790
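Recognition from as little as a short text prompt is typically made more reliable by prompt ensembling: embed several templates per class name and average them. A small sketch with a hypothetical `text_encoder` stand-in, again as an illustration rather than this paper's contribution:

```python
import torch
import torch.nn.functional as F

TEMPLATES = ["a photo of a {}.", "a blurry photo of a {}.", "a drawing of a {}."]

def class_embedding(name, text_encoder):
    """Average several prompt embeddings for one class, then normalize."""
    embs = torch.stack([text_encoder(t.format(name)) for t in TEMPLATES])
    return F.normalize(embs.mean(dim=0), dim=-1)
```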