Showing 1 - 10 of 145 for the search: '"Geisler Simon"'
Many applications in traffic, civil engineering, or electrical engineering revolve around edge-level signals. Such signals can be categorized as inherently directed, for example, the water flow in a pipe network, and undirected, like the diameter of a pipe. …
External link:
http://arxiv.org/abs/2410.16935
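The directed/undirected distinction above can be made concrete with a toy pipe network. The following sketch is my own illustration, not code from the paper: reversing an edge's orientation flips the sign of a directed signal such as flow, but leaves an undirected signal such as diameter unchanged.

import numpy as np

# Edges as (source, target) pairs; the orientation of each edge is a
# modeling choice, not a physical property.
edges = [(0, 1), (1, 2), (2, 0)]

# Directed signal: water flow (hypothetical values, m^3/s).
flow = np.array([1.5, -0.3, 0.8])

# Undirected signal: pipe diameter (hypothetical values, m).
diameter = np.array([0.20, 0.20, 0.15])

def reverse_edge(i):
    # Flip edge i's orientation and update the signals consistently.
    s, t = edges[i]
    edges[i] = (t, s)
    flow[i] = -flow[i]  # directed: sign flips with orientation
    # diameter[i] is untouched: undirected signals are orientation-invariant

reverse_edge(0)
print(edges[0], flow[0], diameter[0])  # (1, 0) -1.5 0.2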
Author:
Schwinn, Leo; Geisler, Simon
Over the past decade, adversarial training has emerged as one of the few reliable methods for enhancing model robustness against adversarial attacks [Szegedy et al., 2014; Madry et al., 2018; Xhonneux et al., 2024], while many alternative approaches …
External link:
http://arxiv.org/abs/2407.15902
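For context, the sketch below shows standard PGD-based adversarial training in the generic style of Madry et al., 2018, not this paper's specific contribution: an inner loop maximizes the loss over a bounded perturbation, and the outer step trains the model on the resulting worst-case inputs.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Inner maximization: find a worst-case perturbation in an L-inf ball.
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        F.cross_entropy(model(x + delta), y).backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the loss
            delta.clamp_(-eps, eps)             # project back into the ball
            delta.grad.zero_()
    return (x + delta).detach()

def adversarial_train_step(model, optimizer, x, y):
    # Outer minimization: fit the model on the perturbed inputs.
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()  # also clears grads accumulated during the attack
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()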
Existing studies have shown that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks. Even though Graph Transformers (GTs) surpassed Message-Passing GNNs on several benchmarks, their adversarial robustness properties are unexplored. …
External link:
http://arxiv.org/abs/2407.11764
Predictions made by graph neural networks (GNNs) usually lack interpretability due to their complex computational behavior and the abstract nature of graphs. In an attempt to tackle this, many GNN explanation methods have emerged. Their goal is to explain …
External link:
http://arxiv.org/abs/2406.06417
Spatial Message Passing Graph Neural Networks (MPGNNs) are widely used for learning on graph-structured data. However, key limitations of l-step MPGNNs are that their "receptive field" is typically limited to the l-hop neighborhood of a node and that …
External link:
http://arxiv.org/abs/2405.19121
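The l-hop receptive-field limitation can be checked numerically. The toy example below is my own illustration, assuming simple mean aggregation: after two message-passing steps on a path graph, node 0's state is still zero for every node more than two hops away.

import torch

def mp_step(h, adj):
    # One round of mean-aggregation message passing.
    # h: [n, d] node features; adj: [n, n] adjacency with self-loops.
    return (adj @ h) / adj.sum(dim=1, keepdim=True)

n = 5
adj = torch.eye(n)                      # self-loops
for i in range(n - 1):                  # path graph 0-1-2-3-4
    adj[i, i + 1] = adj[i + 1, i] = 1.0

h = torch.eye(n)                        # one-hot features per node
for _ in range(2):                      # l = 2 message-passing steps
    h = mp_step(h, adj)

# Node 0's state is nonzero only for nodes within 2 hops (0, 1, 2);
# nodes 3 and 4 lie outside the receptive field.
print(h[0])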
Author:
Geisler, Simon; Wollschläger, Tom; Abdalla, M. H. I.; Gasteiger, Johannes; Günnemann, Stephan
Current LLM alignment methods are readily broken through specifically crafted adversarial prompts. While crafting adversarial prompts using discrete optimization is highly effective, such attacks typically use more than 100,000 LLM calls. This high cost …
External link:
http://arxiv.org/abs/2402.09154
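For intuition about why call counts explode, the sketch below shows a generic random-search baseline for discrete prompt optimization; it is not the paper's method. The loss_fn and vocab arguments are hypothetical stand-ins, and every loss_fn evaluation costs at least one LLM forward pass.

import random

def random_search_attack(loss_fn, vocab, suffix_len=20, iters=500):
    # Greedily accept single-token substitutions that lower the loss.
    suffix = [random.choice(vocab) for _ in range(suffix_len)]
    best = loss_fn(suffix)
    for _ in range(iters):
        cand = list(suffix)
        cand[random.randrange(suffix_len)] = random.choice(vocab)
        score = loss_fn(cand)  # one LLM forward pass per call
        if score < best:
            suffix, best = cand, score
    return suffix, best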
It is well known that deep learning models are vulnerable to small input perturbations. Such perturbed instances are called adversarial examples. Adversarial examples are commonly crafted to fool a model either at training time (poisoning) or test time …
External link:
http://arxiv.org/abs/2312.05502
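A minimal test-time (evasion) example is the classic FGSM step of Goodfellow et al., 2015; the sketch below is illustrative and not tied to this paper. A poisoning attack would instead apply a similar perturbation to the training data.

import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, eps=0.03):
    # One signed-gradient step that increases the loss on the true label y,
    # producing an adversarial example at test time.
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()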
Author:
Guerranti, Filippo; Yi, Zinuo; Starovoit, Anna; Kamel, Rafiq; Geisler, Simon; Günnemann, Stephan
Contrastive learning (CL) has emerged as a powerful framework for learning representations of images and text in a self-supervised manner while enhancing model robustness against adversarial attacks. More recently, researchers have extended the principle …
External link:
http://arxiv.org/abs/2311.17853
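As background, the sketch below implements the widely used InfoNCE objective, a common instantiation of contrastive learning; it is illustrative and not necessarily the exact loss used in the paper.

import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    # z1, z2: [n, d] embeddings of two augmented views of the same n items;
    # row i of z1 and row i of z2 form the positive pair.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature  # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)  # pull positives, push negatives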
To facilitate reliable deployments of autonomous robots in the real world, Out-of-Distribution (OOD) detection capabilities are often required. A powerful approach for OOD detection is based on density estimation with Normalizing Flows (NFs). However, …
External link:
http://arxiv.org/abs/2311.06481
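The standard density-based recipe looks roughly as follows; this is a hedged sketch, where flow.log_prob is an assumed API in the style of nflows or torch.distributions, not this paper's code. Inputs with low log-density under the flow are flagged as OOD.

import torch

def ood_score(flow, x):
    # Negative log-density under the flow: higher score = more likely OOD.
    with torch.no_grad():
        return -flow.log_prob(x)

def flag_ood(flow, x, threshold):
    # The threshold is typically chosen on held-out in-distribution data,
    # e.g. as a quantile of in-distribution scores.
    return ood_score(flow, x) > threshold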
Author:
Geisler, Simon; Mayersbach, Peter; Becker, Kathrin; Schennach, Harald; Fuchs, Dietmar; Gostner, Johanna M.
Published in:
Pteridines, Vol 26, Iss 1, Pp 31-36 (2015)
Formation of neopterin, a biomarker of the activated human immune system, is linked with tryptophan (TRP) and phenylalanine (PHE) metabolism. To obtain normal values, in this study, serum concentrations of neopterin as well as of TRP and PHE …
External link:
https://doaj.org/article/50bb0794815c4fe29d42b297f8c12f22