Showing 1 - 10 of 702 results for the search: '"Lorenz, Peter"'
Detecting out-of-distribution (OOD) inputs is critical for safely deploying deep learning models in real-world scenarios. In recent years, many OOD detectors have been developed, and even the benchmarking has been standardized, i.e. OpenOOD. The numb…
External link:
http://arxiv.org/abs/2406.15104
Author:
Lorenz, Peter, Heller, Annerose, Bunse, Marek, Heinrich, Miriam, Berger, Melanie, Conrad, Jürgen, Stintzing, Florian C., Kammerer, Dietmar R.
Published in:
Julius-Kühn-Archiv, Vol 460, Pp 45-49 (2018)
Hypericum seeds have recently been identified as a natural source of different xanthone derivatives. Two main constituents, the tetrahydroxyxanthones THX-1 and -2, were identified in methanolic extracts of H. perforatum and H. tetrapterum by means of …
External link:
https://doaj.org/article/c96b709f1607463f88d149e7ef09ade3
In recent years, diffusion models (DMs) have drawn significant attention for their success in approximating data distributions, yielding state-of-the-art generative results. Nevertheless, the versatility of these models extends beyond their generativ…
External link:
http://arxiv.org/abs/2401.06637
Diffusion models have recently been applied successfully to the visual synthesis of strikingly realistic-looking images. This raises strong concerns about their potential for malicious purposes. In this paper, we propose using the lightweight mult…
External link:
http://arxiv.org/abs/2307.02347
Convolutional neural networks (CNNs) define the state-of-the-art solution on many perceptual tasks. However, current CNN approaches largely remain vulnerable against adversarial perturbations of the input that have been crafted specifically to fool th…
External link:
http://arxiv.org/abs/2212.06776
In this work, we leverage visual prompting (VP) to improve the adversarial robustness of a fixed, pre-trained model at test time. Compared to conventional adversarial defenses, VP allows us to design universal (i.e., data-agnostic) input prompting tem…
External link:
http://arxiv.org/abs/2210.06284
Recently, RobustBench (Croce et al. 2020) has become a widely recognized benchmark for the adversarial robustness of image classification networks. In its most commonly reported sub-task, RobustBench evaluates and ranks the adversarial robustness of …
External link:
http://arxiv.org/abs/2112.01601
Recently, adversarial attacks on image classification networks by the AutoAttack (Croce and Hein, 2020b) framework have drawn a lot of attention. While AutoAttack has shown a very high attack success rate, most defense approaches are focusing on netw…
External link:
http://arxiv.org/abs/2111.08785