Showing 1 - 10 of 16 results for search: '"Amara, Kenza"'
Author:
Amara, Kenza, Klein, Lukas, Lüth, Carsten, Jäger, Paul, Strobelt, Hendrik, El-Assady, Mennatallah
The various limitations of Generative AI, such as hallucinations and model failures, have made it crucial to understand the role of different modalities in Visual Language Model (VLM) predictions. Our work investigates how the integration of information…
External link:
http://arxiv.org/abs/2410.01690
Author:
Boyle, Alan, Gupta, Isha, Hönig, Sebastian, Mautner, Lukas, Amara, Kenza, Cheng, Furui, El-Assady, Mennatallah
As language models have become increasingly successful at a wide array of tasks, different prompt engineering methods have been developed alongside them in order to adapt these models to new tasks. One of them is Tree-of-Thoughts (ToT), a prompting strategy…
External link:
http://arxiv.org/abs/2409.00413
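At its core, the ToT idea above is a tree search over partial "thoughts" that a language model proposes and scores. A minimal sketch follows, where propose_thoughts and score_thought are hypothetical stand-ins for model calls (toy heuristics here, not code from the paper):

def propose_thoughts(state, k):
    # Hypothetical stand-in for an LM call proposing k candidate next steps.
    return [f"{state} -> step{i}" for i in range(k)]

def score_thought(state):
    # Hypothetical stand-in for an LM-based evaluator of a partial solution.
    return -len(state)  # toy heuristic: prefer shorter reasoning chains

def tree_of_thoughts(root, depth=3, breadth=4, beam=2):
    # Breadth-first ToT: expand every frontier state, keep the `beam` best.
    frontier = [root]
    for _ in range(depth):
        candidates = [t for s in frontier for t in propose_thoughts(s, breadth)]
        candidates.sort(key=score_thought, reverse=True)
        frontier = candidates[:beam]
    return frontier[0]

print(tree_of_thoughts("problem"))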
The necessity for interpretability in natural language processing (NLP) has risen alongside the growing prominence of large language models. Among the myriad tasks within NLP, text generation stands out as a primary objective of autoregressive models…
External link:
http://arxiv.org/abs/2405.08468
To harness the power of large language models in safety-critical domains, we need to ensure the explainability of their predictions. However, despite the significant attention to model interpretability, there remains an unexplored domain in explaining…
External link:
http://arxiv.org/abs/2402.09259
Power grids are critical infrastructures of paramount importance to modern society and, therefore, engineered to operate under diverse conditions and failures. The ongoing energy transition poses new challenges for the decision-makers and system operators…
External link:
http://arxiv.org/abs/2402.02827
Graph Neural Networks (GNNs) achieve state-of-the-art performance in various graph-related tasks. However, their black-box nature often limits their interpretability and trustworthiness. Numerous explainability methods have been proposed to uncover the…
External link:
http://arxiv.org/abs/2311.05764
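Many such explainability methods boil down to assigning each edge a score for how strongly it drives the prediction. A hedged gradient-based sketch of that general idea, on a toy dense-adjacency model (illustrative only, not a method from this paper):

import torch

n, d = 5, 4
x = torch.randn(n, d)                              # node features
adj = torch.rand(n, n)                             # toy dense adjacency
edge_mask = torch.ones(n, n, requires_grad=True)   # differentiable per-edge weights
w = torch.randn(d, 2)                              # one-layer GCN-style weights

# Forward pass: message passing weighted by the edge mask.
h = (adj * edge_mask) @ x @ w
pred = h.sum()

# Gradient of the prediction w.r.t. each edge = a crude edge importance score.
pred.backward()
edge_importance = edge_mask.grad.abs()
print(edge_importance)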
Diverse explainability methods for graph neural networks (GNNs) have recently been developed to highlight the edges and nodes in the graph that contribute the most to the model predictions. However, it is not yet clear how to evaluate the correctness of…
External link:
http://arxiv.org/abs/2309.16223
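A common way to probe such correctness is a fidelity-style test: removing the edges an explainer flags as important should change the prediction (fid+), while keeping only those edges should roughly preserve it (fid-). A minimal sketch, assuming a toy stand-in model and dense adjacency (not the paper's exact metric definitions):

import torch

def model(x, adj):
    # Toy stand-in for a trained GNN: one round of message passing + mean.
    return (adj @ x).mean(dim=1)

def fidelity(x, adj, expl_mask):
    # expl_mask: float (n, n) tensor in [0, 1] marking "important" edges.
    with torch.no_grad():
        full = model(x, adj)
        fid_plus = (full - model(x, adj * (1 - expl_mask))).abs().mean()
        fid_minus = (full - model(x, adj * expl_mask)).abs().mean()
    return fid_plus.item(), fid_minus.item()

x, adj = torch.randn(5, 4), torch.rand(5, 5)
mask = (adj > 0.7).float()   # pretend the explainer picked these edges
print(fidelity(x, adj, mask))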
Author:
Amara, Kenza, Ying, Rex, Zhang, Zitao, Han, Zhihao, Shan, Yinan, Brandes, Ulrik, Schemm, Sebastian, Zhang, Ce
As one of the most popular machine learning models today, graph neural networks (GNNs) have attracted intense interest recently, and so has their explainability. Users are increasingly interested in a better understanding of GNN models and their outcomes…
External link:
http://arxiv.org/abs/2206.09677
Author:
Reiersen, Gyri, Dao, David, Lütjens, Björn, Klemmer, Konstantin, Amara, Kenza, Steinegger, Attila, Zhang, Ce, Zhu, Xiaoxiang
Forest biomass is a key influence on future climate, and the world urgently needs highly scalable financing schemes, such as carbon offsetting certifications, to protect and restore forests. Current manual forest carbon stock inventory methods of measuring…
External link:
http://arxiv.org/abs/2201.11192
Modern approaches for fast retrieval of similar vectors on billion-scale datasets rely on compressed-domain approaches such as binary sketches or product quantization. These methods minimize a certain loss, typically the mean squared error or other…
External link:
http://arxiv.org/abs/2112.09568
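Product quantization, mentioned above, compresses vectors by splitting them into subvectors and replacing each subvector with the index of its nearest per-subspace centroid. A minimal sketch of the general technique (not this paper's specific loss):

import numpy as np
from sklearn.cluster import KMeans

def pq_train(X, m=4, k=16):
    # Train m codebooks of k centroids each; X is (n, d) with d divisible by m.
    n, d = X.shape
    sub = X.reshape(n, m, d // m)
    return [KMeans(n_clusters=k, n_init=4).fit(sub[:, j]).cluster_centers_
            for j in range(m)]

def pq_encode(X, codebooks):
    # Map each vector to m small integer codes (nearest centroid per subspace).
    n, d = X.shape
    m = len(codebooks)
    sub = X.reshape(n, m, d // m)
    return np.stack(
        [np.argmin(((sub[:, j, None] - cb[None]) ** 2).sum(-1), axis=1)
         for j, cb in enumerate(codebooks)], axis=1)

X = np.random.randn(1000, 32).astype(np.float32)
codebooks = pq_train(X)
codes = pq_encode(X, codebooks)   # (1000, 4) integer codes, ~1 byte each at k<=256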