Showing 1 - 10 of 442 for search: '"Araujo, Alexandre"'
Author:
Ghazanfari, Sara, Araujo, Alexandre, Krishnamurthy, Prashanth, Garg, Siddharth, Khorrami, Farshad
Multi-modal Large Language Models (MLLMs) have recently exhibited impressive general-purpose capabilities by leveraging vision foundation models to encode the core concepts of images into representations. These are then combined with instructions and …
External link:
http://arxiv.org/abs/2410.02080
Large Language Models (LLMs) have surged in popularity in recent months, but they have demonstrated concerning capabilities to generate harmful content when manipulated. While techniques like safety fine-tuning aim to minimize harmful use, recent work …
External link:
http://arxiv.org/abs/2402.09674
Author:
Pauli, Patricia, Havens, Aaron, Araujo, Alexandre, Garg, Siddharth, Khorrami, Farshad, Allgöwer, Frank, Hu, Bin
Recently, semidefinite programming (SDP) techniques have shown great promise in providing accurate Lipschitz bounds for neural networks. Specifically, the LipSDP approach (Fazlyab et al., 2019) has received much attention and provides the least conservative …
External link:
http://arxiv.org/abs/2401.14033
Author:
Ghazanfari, Sara, Araujo, Alexandre, Krishnamurthy, Prashanth, Khorrami, Farshad, Garg, Siddharth
Recent years have seen growing interest in developing and applying perceptual similarity metrics. Research has shown the superiority of perceptual metrics over pixel-wise metrics in aligning with human perception and serving as a proxy for the human …
External link:
http://arxiv.org/abs/2310.18274
Author:
Laousy, Othmane, Araujo, Alexandre, Chassagnon, Guillaume, Paragios, Nikos, Revel, Marie-Pierre, Vakalopoulou, Maria
In medical imaging, segmentation models have seen significant improvement over the past decade and are now used daily in clinical practice. However, similar to classification models, segmentation models are affected by adversarial attacks. In a safety-critical …
External link:
http://arxiv.org/abs/2310.03664
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks. The certified radius in this context is a crucial indicator of the robustness of models. However, how to design …
External link:
http://arxiv.org/abs/2309.16883
Author:
Ghazanfari, Sara, Garg, Siddharth, Krishnamurthy, Prashanth, Khorrami, Farshad, Araujo, Alexandre
Similarity metrics have played a significant role in computer vision to capture the underlying semantics of images. In recent years, advanced similarity metrics, such as the Learned Perceptual Image Patch Similarity (LPIPS), have emerged. These metrics …
External link:
http://arxiv.org/abs/2307.15157
Author:
Laousy, Othmane, Araujo, Alexandre, Chassagnon, Guillaume, Revel, Marie-Pierre, Garg, Siddharth, Khorrami, Farshad, Vakalopoulou, Maria
The robustness of image segmentation has been an important research topic in the past few years as segmentation models have reached production-level accuracy. However, like classification models, segmentation models can be vulnerable to adversarial perturbations …
External link:
http://arxiv.org/abs/2306.09949
Neural networks are known to be susceptible to adversarial samples: small variations of natural examples crafted to deliberately mislead the models. While they can be easily generated using gradient-based techniques in digital and physical scenarios, …
External link:
http://arxiv.org/abs/2305.16494