Showing 1 - 10 of 296 for search: '"Ayed, Ismail Ben"'
Author:
Bahri, Ali, Yazdanpanah, Moslem, Noori, Mehrdad, Oghani, Sahar Dastani, Cheraghalikhani, Milad, Osowiechi, David, Beizaee, Farzad, Vargas-Hakim, Gustavo Adolfo, Ayed, Ismail Ben, Desrosiers, Christian
Test-Time Adaptation (TTA) addresses distribution shifts during testing by adapting a pretrained model without access to source data. In this work, we propose a novel TTA approach for 3D point cloud classification, combining sampling variation with w… (a generic TTA sketch follows below)
External link:
http://arxiv.org/abs/2411.01116
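For readers unfamiliar with the paradigm, here is a minimal TENT-style test-time adaptation loop in PyTorch: minimize the entropy of the model's predictions on unlabeled test batches while updating only a few parameters. This is a generic sketch, not the point-cloud method of the paper above; the model and data are placeholders.

```python
# Generic TENT-style test-time adaptation: minimize prediction entropy
# on unlabeled test batches. Placeholder model/data, NOT this paper's
# point-cloud method.
import torch
import torch.nn as nn

def entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy of the softmax predictions."""
    probs = logits.softmax(dim=-1)
    return -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()

# Stand-in for a pretrained 3D point-cloud classifier.
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 10))

# Adapt only a small parameter subset at test time (here: the head).
optimizer = torch.optim.SGD(model[-1].parameters(), lr=1e-3)

test_batch = torch.randn(32, 3)  # stand-in for shifted test data
for _ in range(10):              # a few adaptation steps per batch
    optimizer.zero_grad()
    loss = entropy(model(test_batch))
    loss.backward()
    optimizer.step()
```

Updating only a small parameter subset (classically the normalization layers) keeps adaptation cheap and limits drift from the pretrained solution.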
Author:
Shakeri, Fereshteh, Huang, Yunshi, Silva-Rodríguez, Julio, Bahig, Houda, Tang, An, Dolz, Jose, Ayed, Ismail Ben
Integrating image and text data through multi-modal learning has emerged as a new approach in medical imaging research, following its successful deployment in computer vision. While considerable efforts have been dedicated to establishing medical fou…
External link:
http://arxiv.org/abs/2409.03868
The development of vision-language models (VLMs) for histo-pathology has shown promising new usages and zero-shot performances. However, current approaches, which decompose large slides into smaller patches, focus solely on inductive classification, …
External link:
http://arxiv.org/abs/2409.01883
Author:
Khoury, Karim El, Zanella, Maxime, Gérin, Benoît, Godelaine, Tiffanie, Macq, Benoît, Mahmoudi, Saïd, De Vleeschouwer, Christophe, Ayed, Ismail Ben
Vision-Language Models for remote sensing have shown promising uses thanks to their extensive pretraining. However, their conventional usage in zero-shot scene classification methods still involves dividing large images into patches and making indepe… (a sketch of this patch-wise baseline follows below)
External link:
http://arxiv.org/abs/2409.00698
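As context for the "independent patch predictions" the abstract refers to, the sketch below shows that conventional zero-shot pipeline: tile a large scene, embed each tile, and pick the class whose text embedding is most similar, independently per tile. The encoder and text embeddings are random stand-ins, not a real CLIP checkpoint.

```python
# Conventional patch-wise zero-shot classification: each tile is
# classified independently against class text embeddings.
import torch

def zero_shot_patches(image: torch.Tensor, patch: int,
                      image_encoder, text_embeds: torch.Tensor):
    C, H, W = image.shape
    preds = []
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            crop = image[:, y:y + patch, x:x + patch]
            feat = image_encoder(crop.flatten())   # (D,) tile embedding
            feat = feat / feat.norm()
            sims = feat @ text_embeds.T            # cosine similarities
            preds.append(sims.argmax().item())     # independent decision
    return preds

torch.manual_seed(0)
D, num_classes = 16, 5
encoder = torch.nn.Linear(3 * 32 * 32, D)          # stand-in image encoder
text = torch.nn.functional.normalize(torch.randn(num_classes, D), dim=-1)
image = torch.randn(3, 128, 128)                   # stand-in large scene
print(zero_shot_patches(image, 32, encoder, text))
```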
This paper addresses the critical issue of miscalibration in CLIP-based model adaptation, particularly in the challenging scenario of out-of-distribution (OOD) samples, which has been overlooked in the existing literature on CLIP adaptation. We empir… (an ECE sketch follows below)
External link:
http://arxiv.org/abs/2407.13588
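Miscalibration in this literature is typically measured with the Expected Calibration Error (ECE): bin predictions by confidence and compare per-bin average confidence against per-bin accuracy. A standard implementation, with made-up numbers in the usage line:

```python
# Expected Calibration Error: weighted average gap between confidence
# and accuracy across confidence bins.
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    confidences = np.asarray(confidences)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap   # weight by fraction of samples in bin
    return ece

# Toy usage with made-up predictions:
print(expected_calibration_error([0.9, 0.8, 0.6, 0.95], [1, 0, 2, 1], [1, 0, 1, 1]))
```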
Author:
Noori, Mehrdad, Cheraghalikhani, Milad, Bahri, Ali, Hakim, Gustavo Adolfo Vargas, Osowiechi, David, Yazdanpanah, Moslem, Ayed, Ismail Ben, Desrosiers, Christian
Domain Generalization techniques aim to enhance model robustness by simulating novel data distributions during training, typically through various augmentation or stylization strategies. However, these methods frequently suffer from limited control o… (a stylization sketch follows below)
External link:
http://arxiv.org/abs/2407.03588
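One widely used instance of the "stylization strategies" mentioned above is MixStyle-like augmentation, which synthesizes novel styles by mixing per-instance feature statistics across a batch. The sketch below illustrates that generic technique, not the method this paper proposes:

```python
# MixStyle-like stylization: mix channel-wise mean/std of feature maps
# across a shuffled batch to simulate novel "styles" during training.
import torch

def mixstyle(x: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    B = x.size(0)
    mu = x.mean(dim=(2, 3), keepdim=True)            # per-instance style stats
    sig = x.std(dim=(2, 3), keepdim=True) + 1e-6
    x_norm = (x - mu) / sig                          # strip original style
    lam = torch.distributions.Beta(alpha, alpha).sample((B, 1, 1, 1))
    perm = torch.randperm(B)
    mu_mix = lam * mu + (1 - lam) * mu[perm]         # interpolate styles
    sig_mix = lam * sig + (1 - lam) * sig[perm]
    return x_norm * sig_mix + mu_mix                 # same content, new style

features = torch.randn(8, 64, 14, 14)   # stand-in CNN feature maps
augmented = mixstyle(features)
```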
Author:
Osowiechi, David, Noori, Mehrdad, Hakim, Gustavo Adolfo Vargas, Yazdanpanah, Moslem, Bahri, Ali, Cheraghalikhani, Milad, Dastani, Sahar, Beizaee, Farzad, Ayed, Ismail Ben, Desrosiers, Christian
Vision-Language Models (VLMs) such as CLIP have yielded unprecedented performance for zero-shot image classification, yet their generalization capability may still be seriously challenged when confronted with domain shifts. In response, we present Weig… (a weight-averaging sketch follows below)
External link:
http://arxiv.org/abs/2406.13875
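The method name is cut off above; without reconstructing it, the sketch below shows plain state-dict averaging of several model copies, a generic building block of weight-averaging approaches to adaptation. The models are toy stand-ins.

```python
# Generic weight averaging: element-wise mean of several models'
# parameters, loaded into a merged model. Toy stand-in models only.
import copy
import torch
import torch.nn as nn

def average_state_dicts(models):
    """Return the element-wise average of the models' parameters."""
    avg = copy.deepcopy(models[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([m.state_dict()[key] for m in models]).mean(dim=0)
    return avg

models = [nn.Linear(4, 2) for _ in range(3)]  # e.g., separately adapted copies
merged = nn.Linear(4, 2)
merged.load_state_dict(average_state_dicts(models))
```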
Embedders play a central role in machine learning, projecting any object into numerical representations that can, in turn, be leveraged to perform various downstream tasks. The evaluation of embedding models typically depends on domain-specific empir… (a retrieval-accuracy sketch follows below)
External link:
http://arxiv.org/abs/2406.07640
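The domain-specific empirical evaluation the abstract alludes to often reduces to labeled retrieval or classification probes. As an assumed, typical example (not this paper's protocol): leave-one-out 1-NN accuracy under cosine similarity, on synthetic embeddings.

```python
# Typical empirical embedder evaluation: leave-one-out 1-NN retrieval
# accuracy under cosine similarity. Synthetic embeddings and labels.
import numpy as np

def knn_accuracy(embeds: np.ndarray, labels: np.ndarray) -> float:
    normed = embeds / np.linalg.norm(embeds, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)        # exclude self-matches
    nearest = sims.argmax(axis=1)          # index of each point's 1-NN
    return float((labels[nearest] == labels).mean())

rng = np.random.default_rng(0)
embeds = rng.normal(size=(100, 32))        # stand-in embeddings
labels = rng.integers(0, 5, size=100)      # stand-in ground truth
print(knn_accuracy(embeds, labels))
```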
Transduction is a powerful paradigm that leverages the structure of unlabeled data to boost predictive accuracy. We present TransCLIP, a novel and computationally efficient transductive approach designed for Vision-Language Models (VLMs). TransCLIP i… (a transduction sketch follows below)
External link:
http://arxiv.org/abs/2406.01837
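To make transduction concrete, the sketch below refines per-sample zero-shot scores by propagating them over a cosine-similarity graph built from the unlabeled test embeddings; this illustrates the paradigm in general, not TransCLIP's actual objective or guarantees.

```python
# Transductive refinement via label propagation: neighbours in the
# unlabeled test set share prediction evidence.
import numpy as np

def propagate(scores: np.ndarray, feats: np.ndarray,
              alpha: float = 0.5, steps: int = 10) -> np.ndarray:
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    W = np.clip(normed @ normed.T, 0, None)            # nonneg affinities
    np.fill_diagonal(W, 0)
    W = W / (W.sum(axis=1, keepdims=True) + 1e-8)      # row-stochastic graph
    Z = scores.copy()
    for _ in range(steps):                             # diffuse scores
        Z = alpha * W @ Z + (1 - alpha) * scores
    return Z.argmax(axis=1)

rng = np.random.default_rng(1)
feats = rng.normal(size=(50, 16))    # stand-in image embeddings
scores = rng.normal(size=(50, 4))    # stand-in zero-shot text similarities
print(propagate(scores, feats)[:10])
```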
Author:
Zanella, Maxime, Ayed, Ismail Ben
Recent progress in the few-shot adaptation of Vision-Language Models (VLMs) has further pushed their generalization capabilities, at the expense of just a few labeled samples within the target downstream task. However, this promising, already quite a… (a few-shot probe sketch follows below)
External link:
http://arxiv.org/abs/2405.18541
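Few-shot VLM adaptation is commonly instantiated as a lightweight probe trained on frozen image features from a handful of labeled shots per class. The sketch below uses synthetic features in place of CLIP embeddings and is not this paper's method.

```python
# Few-shot linear probe: fit a logistic-regression head on frozen
# features from a few labeled shots per class. Synthetic features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
num_classes, shots, dim = 4, 16, 64                     # 16-shot toy setting
train_x = rng.normal(size=(num_classes * shots, dim))   # stand-in frozen feats
train_y = np.repeat(np.arange(num_classes), shots)

probe = LogisticRegression(max_iter=1000).fit(train_x, train_y)
test_x = rng.normal(size=(10, dim))
print(probe.predict(test_x))
```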