Showing 1 - 6 of 6 results for search: '"Yazdanpanah, Moslem"'
Author:
Bahri, Ali, Yazdanpanah, Moslem, Noori, Mehrdad, Oghani, Sahar Dastani, Cheraghalikhani, Milad, Osowiechi, David, Beizaee, Farzad, Hakim, Gustavo Adolfo Vargas, Ayed, Ismail Ben, Desrosiers, Christian
Test-Time Adaptation (TTA) addresses distribution shifts during testing by adapting a pretrained model without access to source data. In this work, we propose a novel TTA approach for 3D point cloud classification, combining sampling variation with…
External link:
http://arxiv.org/abs/2411.01116
Author:
Noori, Mehrdad, Cheraghalikhani, Milad, Bahri, Ali, Hakim, Gustavo Adolfo Vargas, Osowiechi, David, Yazdanpanah, Moslem, Ayed, Ismail Ben, Desrosiers, Christian
Domain Generalization techniques aim to enhance model robustness by simulating novel data distributions during training, typically through various augmentation or stylization strategies. However, these methods frequently suffer from limited control over…
External link:
http://arxiv.org/abs/2407.03588
Author:
Osowiechi, David, Noori, Mehrdad, Hakim, Gustavo Adolfo Vargas, Yazdanpanah, Moslem, Bahri, Ali, Cheraghalikhani, Milad, Dastani, Sahar, Beizaee, Farzad, Ayed, Ismail Ben, Desrosiers, Christian
Vision-Language Models (VLMs) such as CLIP have yielded unprecedented performance for zero-shot image classification, yet their generalization capability may still be seriously challenged when confronted with domain shifts. In response, we present…
External link:
http://arxiv.org/abs/2406.13875
Author:
Bahri, Ali, Yazdanpanah, Moslem, Noori, Mehrdad, Cheraghalikhani, Milad, Hakim, Gustavo Adolfo Vargas, Osowiechi, David, Beizaee, Farzad, Ayed, Ismail Ben, Desrosiers, Christian
We introduce a pioneering approach to self-supervised learning for point clouds, employing a geometrically informed mask selection strategy called GeoMask3D (GM3D) to boost the efficiency of Masked Auto Encoders (MAE). Unlike the conventional method…
External link:
http://arxiv.org/abs/2405.12419
Author:
Hakim, Gustavo Adolfo Vargas, Osowiechi, David, Noori, Mehrdad, Cheraghalikhani, Milad, Bahri, Ali, Yazdanpanah, Moslem, Ayed, Ismail Ben, Desrosiers, Christian
Pre-trained vision-language models (VLMs), exemplified by CLIP, demonstrate remarkable adaptability across zero-shot classification tasks without additional training. However, their performance diminishes in the presence of domain shifts. In this study…
External link:
http://arxiv.org/abs/2405.00754
Author:
Osowiechi, David, Hakim, Gustavo A. Vargas, Noori, Mehrdad, Cheraghalikhani, Milad, Bahri, Ali, Yazdanpanah, Moslem, Ayed, Ismail Ben, Desrosiers, Christian
Despite their exceptional performance in vision tasks, deep learning models often struggle when faced with domain shifts during testing. Test-Time Training (TTT) methods have recently gained popularity for their ability to enhance the robustness of models…
External link:
http://arxiv.org/abs/2404.08392