Showing 1 - 10 of 126 for search: '"Rodriguez, Pau"'
Author:
Suau, Xavier, Delobelle, Pieter, Metcalf, Katherine, Joulin, Armand, Apostoloff, Nicholas, Zappella, Luca, Rodríguez, Pau
An important issue with Large Language Models (LLMs) is their undesired ability to generate toxic language. In this work, we show that the neurons responsible for toxicity can be determined by their power to discriminate toxic sentences …
External link:
http://arxiv.org/abs/2407.12824
Author:
Rodriguez, Juan A., Agarwal, Shubham, Laradji, Issam H., Rodriguez, Pau, Vazquez, David, Pal, Christopher, Pedersoli, Marco
Scalable Vector Graphics (SVGs) have become integral in modern image rendering applications due to their infinite scalability in resolution, versatile usability, and editing capabilities. SVGs are particularly popular in the fields of web development …
External link:
http://arxiv.org/abs/2312.11556
Diffusion models are powerful generative models that achieve state-of-the-art performance in image synthesis. However, training them demands substantial amounts of data and computational resources. Continual learning would allow for incrementally learning …
External link:
http://arxiv.org/abs/2311.14028
A key aspect of human intelligence is the ability to imagine -- composing learned concepts in novel ways -- to make sense of new scenarios. Such capacity is not yet attained by machine learning systems. In this work, in the context of visual reasoning …
External link:
http://arxiv.org/abs/2310.18807
Empirical risk minimization (ERM) is sensitive to spurious correlations in the training data, which poses a significant risk when deploying systems trained under this paradigm in high-stakes applications. While the existing literature focuses on …
External link:
http://arxiv.org/abs/2310.18555
What distinguishes robust models from non-robust ones? This question has gained traction with the appearance of large-scale multimodal models, such as CLIP. These models have demonstrated unprecedented robustness with respect to natural distribution …
External link:
http://arxiv.org/abs/2310.13040
Parallelization techniques have become ubiquitous for accelerating inference and training of deep neural networks. Despite this, several operations are still performed in a sequential manner. For instance, the forward and backward passes are executed …
External link:
http://arxiv.org/abs/2309.16318
Author:
Rodríguez-Gálvez, Borja, Blaas, Arno, Rodríguez, Pau, Goliński, Adam, Suau, Xavier, Ramapuram, Jason, Busbridge, Dan, Zappella, Luca
The mechanisms behind the success of multi-view self-supervised learning (MVSSL) are not yet fully understood. Contrastive MVSSL methods have been studied through the lens of InfoNCE, a lower bound of the Mutual Information (MI). However, the relationship …
External link:
http://arxiv.org/abs/2307.10907
Author:
Lacoste, Alexandre, Lehmann, Nils, Rodriguez, Pau, Sherwin, Evan David, Kerner, Hannah, Lütjens, Björn, Irvin, Jeremy Andrew, Dao, David, Alemohammad, Hamed, Drouin, Alexandre, Gunturkun, Mehmet, Huang, Gabriel, Vazquez, David, Newman, Dava, Bengio, Yoshua, Ermon, Stefano, Zhu, Xiao Xiang
Recent progress in self-supervision has shown that pre-training large neural networks on vast amounts of unsupervised data can lead to substantial increases in generalization to downstream tasks. Such models, recently coined foundation models, …
External link:
http://arxiv.org/abs/2306.03831
The generative modeling landscape has experienced tremendous growth in recent years, particularly in generating natural images and art. Recent techniques have shown impressive potential in creating complex visual compositions while delivering …
External link:
http://arxiv.org/abs/2306.00800