Showing 1 - 10 of 486 for search: '"Ayed, Ismail"'
Author:
Fillioux, Leo, Silva-Rodríguez, Julio, Ayed, Ismail Ben, Cournède, Paul-Henry, Vakalopoulou, Maria, Christodoulidis, Stergios, Dolz, Jose
Recent advances in self-supervision and contrastive learning have brought the performance of foundation models to unprecedented levels in a variety of tasks. Fueled by this progress, these models are becoming the prevailing approach for a wide array…
External link:
http://arxiv.org/abs/2412.06082
Words Matter: Leveraging Individual Text Embeddings for Code Generation in CLIP Test-Time Adaptation
Vision-language foundation models, such as CLIP, have shown unprecedented zero-shot performance across a wide range of tasks. Nevertheless, these models may be unreliable under distributional shifts, as their performance is significantly degraded. …
External link:
http://arxiv.org/abs/2411.17002
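As a rough sketch of the zero-shot mechanism this entry builds on: class names are embedded as text vectors, and each test image is assigned to the class whose text embedding it most resembles. The features below are random placeholders standing in for CLIP's encoders, so this illustrates the interface only, not the paper's method.

```python
import torch
import torch.nn.functional as F

# Placeholder features standing in for CLIP's image and text towers:
# in practice these come from the pretrained encoders.
num_classes, dim = 5, 512
image_features = F.normalize(torch.randn(8, dim), dim=-1)           # 8 test images
text_features = F.normalize(torch.randn(num_classes, dim), dim=-1)  # one per class name

# Zero-shot prediction: cosine similarity between each image and the
# class-name text embeddings, scaled by a fixed temperature.
logits = 100.0 * image_features @ text_features.t()
pred = logits.softmax(dim=-1).argmax(dim=-1)
```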
Author:
Bahri, Ali, Yazdanpanah, Moslem, Noori, Mehrdad, Oghani, Sahar Dastani, Cheraghalikhani, Milad, Osowiechi, David, Beizaee, Farzad, Vargas-Hakim, Gustavo Adolfo, Ayed, Ismail Ben, Desrosiers, Christian
Test-Time Adaptation (TTA) addresses distribution shifts during testing by adapting a pretrained model without access to source data. In this work, we propose a novel TTA approach for 3D point cloud classification, combining sampling variation with weight averaging…
External link:
http://arxiv.org/abs/2411.01116
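A minimal sketch of the idea named in the snippet (sampling variation combined with weight averaging): several copies of a placeholder classifier are each adapted on a differently subsampled view of the same test cloud, and their weights are then averaged. The tiny model and the entropy objective are assumptions for illustration, not the paper's architecture or loss.

```python
import copy
import torch
import torch.nn as nn

# Placeholder point-wise classifier; the paper's backbone would go here.
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 10))

def entropy(logits):
    # Mean prediction entropy, a common self-supervised TTA objective.
    p = logits.softmax(dim=-1)
    return -(p * p.clamp_min(1e-8).log()).sum(dim=-1).mean()

cloud = torch.randn(1024, 3)  # one unlabeled test point cloud
states = []
for _ in range(4):  # several sampling variations of the same cloud
    view = cloud[torch.randperm(1024)[:256]]    # random subsample
    m = copy.deepcopy(model)
    opt = torch.optim.SGD(m.parameters(), lr=1e-3)
    loss = entropy(m(view))
    opt.zero_grad()
    loss.backward()
    opt.step()
    states.append(m.state_dict())

# Average the weights of the adapted copies into one model.
avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
model.load_state_dict(avg)
```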
Author:
Shakeri, Fereshteh, Huang, Yunshi, Silva-Rodríguez, Julio, Bahig, Houda, Tang, An, Dolz, Jose, Ayed, Ismail Ben
Integrating image and text data through multi-modal learning has emerged as a new approach in medical imaging research, following its successful deployment in computer vision. While considerable efforts have been dedicated to establishing medical foundation models…
External link:
http://arxiv.org/abs/2409.03868
The development of vision-language models (VLMs) for histo-pathology has shown promising new applications and zero-shot performance. However, current approaches, which decompose large slides into smaller patches, focus solely on inductive classification…
External link:
http://arxiv.org/abs/2409.01883
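To make the inductive setting the snippet mentions concrete, the sketch below starts from independent per-patch zero-shot probabilities and then refines them transductively by averaging each patch's prediction over its nearest neighbors in feature space. Features and initial probabilities are random placeholders; the refinement rule is a generic example, not the paper's.

```python
import torch
import torch.nn.functional as F

n, d, c = 200, 512, 4                        # patches, feature dim, classes
feats = F.normalize(torch.randn(n, d), dim=-1)
probs = torch.rand(n, c).softmax(dim=-1)     # inductive per-patch predictions

# Transductive smoothing: average each patch's prediction over its
# nearest neighbors (including itself) in feature space.
knn = (feats @ feats.t()).topk(k=10, dim=-1).indices
refined = probs[knn].mean(dim=1)
pred = refined.argmax(dim=-1)
```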
Author:
Khoury, Karim El, Zanella, Maxime, Gérin, Benoît, Godelaine, Tiffanie, Macq, Benoît, Mahmoudi, Saïd, De Vleeschouwer, Christophe, Ayed, Ismail Ben
Vision-Language Models for remote sensing have shown promising uses thanks to their extensive pretraining. However, their conventional usage in zero-shot scene classification methods still involves dividing large images into patches and making independent predictions…
External link:
http://arxiv.org/abs/2409.00698
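The conventional pipeline the snippet describes, tiling a large scene into patches and classifying each one independently, can be sketched as follows; the image encoder is a stand-in that returns random features.

```python
import torch
import torch.nn.functional as F

def encode(patches):
    # Stand-in for a CLIP-style image encoder (random features).
    return F.normalize(torch.randn(patches.shape[0], 512), dim=-1)

image = torch.randn(3, 1024, 1024)                       # one large scene
tiles = image.unfold(1, 224, 224).unfold(2, 224, 224)    # non-overlapping tiles
tiles = tiles.permute(1, 2, 0, 3, 4).reshape(-1, 3, 224, 224)

text = F.normalize(torch.randn(10, 512), dim=-1)         # 10 class embeddings
logits = 100.0 * encode(tiles) @ text.t()
per_patch_pred = logits.argmax(dim=-1)                   # independent decisions
```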
This paper addresses the critical issue of miscalibration in CLIP-based model adaptation, particularly in the challenging scenario of out-of-distribution (OOD) samples, which has been overlooked in the existing literature on CLIP adaptation. We empirically…
External link:
http://arxiv.org/abs/2407.13588
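Miscalibration of the kind discussed here is commonly quantified with the expected calibration error (ECE): predictions are binned by confidence, and the gap between accuracy and confidence is averaged across bins. A self-contained sketch on synthetic over-confident predictions (the data and the 0.8 factor are made up for illustration):

```python
import torch

def ece(confidences, correct, n_bins=10):
    # Bin predictions by confidence; accumulate |accuracy - confidence|
    # weighted by the fraction of samples in each bin.
    bins = torch.linspace(0, 1, n_bins + 1)
    err = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = (correct[mask].float().mean() - confidences[mask].mean()).abs()
            err += (mask.float().mean() * gap).item()
    return err

conf = torch.rand(1000)                  # max softmax probabilities
hits = torch.rand(1000) < conf * 0.8     # synthetic over-confident model
print(f"ECE: {ece(conf, hits):.3f}")
```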
Author:
Noori, Mehrdad, Cheraghalikhani, Milad, Bahri, Ali, Hakim, Gustavo Adolfo Vargas, Osowiechi, David, Yazdanpanah, Moslem, Ayed, Ismail Ben, Desrosiers, Christian
Domain Generalization techniques aim to enhance model robustness by simulating novel data distributions during training, typically through various augmentation or stylization strategies. However, these methods frequently suffer from limited control over…
External link:
http://arxiv.org/abs/2407.03588
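One common stylization strategy of the kind the snippet refers to mixes channel-wise feature statistics between training instances, in the spirit of MixStyle, to simulate novel styles. The sketch below is a generic version of that idea, not this paper's method.

```python
import torch

def mixstyle(x, alpha=0.1):
    # x: [B, C, H, W] intermediate features. Mix channel-wise mean/std
    # between instances to simulate novel styles during training.
    mu = x.mean(dim=(2, 3), keepdim=True)
    sig = x.std(dim=(2, 3), keepdim=True) + 1e-6
    lam = torch.distributions.Beta(alpha, alpha).sample((x.size(0), 1, 1, 1))
    perm = torch.randperm(x.size(0))
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sig_mix = lam * sig + (1 - lam) * sig[perm]
    return (x - mu) / sig * sig_mix + mu_mix

stylized = mixstyle(torch.randn(8, 64, 32, 32))
```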
Author:
Osowiechi, David, Noori, Mehrdad, Hakim, Gustavo Adolfo Vargas, Yazdanpanah, Moslem, Bahri, Ali, Cheraghalikhani, Milad, Dastani, Sahar, Beizaee, Farzad, Ayed, Ismail Ben, Desrosiers, Christian
Vision-Language Models (VLMs) such as CLIP have yielded unprecedented performance for zero-shot image classification, yet their generalization capability may still be seriously challenged when confronted with domain shifts. In response, we present Weight Average Test-Time Adaptation (WATT)…
External link:
http://arxiv.org/abs/2406.13875
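The weight-averaging idea the acronym points to can be sketched generically: one copy of a placeholder classification head is adapted per prompt template by entropy minimization, and the adapted weights are averaged into a single model. The head, the loss, and the number of templates are all assumptions for illustration, not the paper's actual procedure.

```python
import copy
import torch
import torch.nn as nn

head = nn.Linear(512, 10)            # placeholder classification head
feats = torch.randn(32, 512)         # unlabeled test-image features

states = []
for _ in range(4):                   # one adaptation per prompt template
    m = copy.deepcopy(head)
    opt = torch.optim.SGD(m.parameters(), lr=1e-3)
    p = m(feats).softmax(dim=-1)
    loss = -(p * p.clamp_min(1e-8).log()).sum(dim=-1).mean()  # entropy
    opt.zero_grad()
    loss.backward()
    opt.step()
    states.append(m.state_dict())

# Collapse the adapted copies into one set of averaged weights.
avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
head.load_state_dict(avg)
```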
Embedders play a central role in machine learning, projecting any object into numerical representations that can, in turn, be leveraged to perform various downstream tasks. The evaluation of embedding models typically depends on domain-specific empirical…
External link:
http://arxiv.org/abs/2406.07640
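The role described here can be made concrete with a toy embedder, a random projection standing in for a learned model, whose outputs feed a downstream nearest-neighbor retrieval task:

```python
import torch
import torch.nn.functional as F

proj = torch.randn(1000, 64)     # hypothetical embedder: a random projection

def embed(x):
    # Map raw object features to unit-norm embedding vectors.
    return F.normalize(x @ proj, dim=-1)

corpus = embed(torch.randn(500, 1000))          # embedded collection
query = embed(torch.randn(1, 1000))             # embedded query
top5 = (query @ corpus.t()).topk(5).indices     # downstream retrieval
```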