Showing 1 - 10 of 28 for search: '"Hammoud, Hasan Abed Al Kader"'
Author:
Hammoud, Hasan Abed Al Kader, Michieli, Umberto, Pizzati, Fabio, Torr, Philip, Bibi, Adel, Ghanem, Bernard, Ozay, Mete
Merging Large Language Models (LLMs) is a cost-effective technique for combining multiple expert LLMs into a single versatile model, retaining the expertise of the original ones. However, current approaches often overlook the importance of safety alignment…
External link:
http://arxiv.org/abs/2406.14563
Author:
Hammoud, Hasan Abed Al Kader, Das, Tuhin, Pizzati, Fabio, Torr, Philip, Bibi, Adel, Ghanem, Bernard
We explore the impact of training with more diverse datasets, characterized by the number of unique samples, on the performance of self-supervised learning (SSL) under a fixed computational budget. Our findings consistently demonstrate that increasing…
External link:
http://arxiv.org/abs/2403.13808
Author:
Hammoud, Hasan Abed Al Kader, Itani, Hani, Pizzati, Fabio, Torr, Philip, Bibi, Adel, Ghanem, Bernard
We present SynthCLIP, a CLIP model trained on entirely synthetic text-image pairs. Leveraging recent text-to-image (TTI) networks and large language models (LLM), we generate synthetic datasets of images and corresponding captions at scale, with no…
External link:
http://arxiv.org/abs/2402.01832
Author:
Prabhu, Ameya, Hammoud, Hasan Abed Al Kader, Lim, Ser-Nam, Ghanem, Bernard, Torr, Philip H. S., Bibi, Adel
Continual Learning (CL) often relies on the availability of extensive annotated datasets, an assumption that is unrealistically time-consuming and costly in practice. We explore a novel paradigm termed name-only continual learning where time and cost…
External link:
http://arxiv.org/abs/2311.11293
Author:
Zhuge, Mingchen, Liu, Haozhe, Faccio, Francesco, Ashley, Dylan R., Csordás, Róbert, Gopalakrishnan, Anand, Hamdi, Abdullah, Hammoud, Hasan Abed Al Kader, Herrmann, Vincent, Irie, Kazuki, Kirsch, Louis, Li, Bing, Li, Guohao, Liu, Shuming, Mai, Jinjie, Piękos, Piotr, Ramesh, Aditya, Schlag, Imanol, Shi, Weimin, Stanić, Aleksandar, Wang, Wenyi, Wang, Yuhui, Xu, Mengmeng, Fan, Deng-Ping, Ghanem, Bernard, Schmidhuber, Jürgen
Both Minsky's "society of mind" and Schmidhuber's "learning to think" inspire diverse societies of large multimodal neural networks (NNs) that solve problems by interviewing each other in a "mindstorm." Recent implementations of NN-based societies of…
External link:
http://arxiv.org/abs/2305.17066
Author:
Hammoud, Hasan Abed Al Kader, Prabhu, Ameya, Lim, Ser-Nam, Torr, Philip H. S., Bibi, Adel, Ghanem, Bernard
We revisit the common practice of evaluating adaptation of Online Continual Learning (OCL) algorithms through the metric of online accuracy, which measures the accuracy of the model on the immediate next few samples. However, we show that this metric…
External link:
http://arxiv.org/abs/2305.09275
Author:
Hammoud, Hasan Abed Al Kader
Deep Neural Networks (DNNs) are ubiquitous and span a variety of applications ranging from image classification and facial recognition to medical image analysis and real-time object detection. As DNN models become more sophisticated and complex…
External link:
http://hdl.handle.net/10754/676301
The rapid advancement of chat-based language models has led to remarkable progress in complex task-solving. However, their success heavily relies on human input to guide the conversation, which can be challenging and time-consuming. This paper explores…
External link:
http://arxiv.org/abs/2303.17760
In this paper we investigate the frequency sensitivity of Deep Neural Networks (DNNs) when presented with clean samples versus poisoned samples. Our analysis shows significant disparities in frequency sensitivity between these two types of samples…
External link:
http://arxiv.org/abs/2303.13211
Author:
Prabhu, Ameya, Hammoud, Hasan Abed Al Kader, Dokania, Puneet, Torr, Philip H. S., Lim, Ser-Nam, Ghanem, Bernard, Bibi, Adel
Continual Learning (CL) aims to sequentially train models on streams of incoming data that vary in distribution by preserving previous knowledge while adapting to new data. Current CL literature focuses on restricted access to previously seen data…
External link:
http://arxiv.org/abs/2303.11165