Showing 1 - 10 of 81
for search: '"Saffari, Amir"'
Large Language Models (LLMs) are capable of performing zero-shot closed-book question answering tasks, based on their internal knowledge stored in parameters during pre-training. However, such internalized knowledge might be insufficient and incorrect…
External link:
http://arxiv.org/abs/2306.04136
A bottleneck to developing Semantic Parsing (SP) models is the need for a large volume of human-labeled training data. Given the complexity and cost of human annotation for SP, labeled data is often scarce, particularly in multilingual settings. Large…
External link:
http://arxiv.org/abs/2210.07074
We introduce Mintaka, a complex, natural, and multilingual dataset designed for experimenting with end-to-end question-answering models. Mintaka is composed of 20,000 question-answer pairs collected in English, annotated with Wikidata entities, and…
External link:
http://arxiv.org/abs/2210.01613
Recently, end-to-end (E2E) trained models for question answering over knowledge graphs (KGQA) have delivered promising results using only a weakly supervised dataset. However, these models are trained and evaluated in a setting where hand-annotated questions…
External link:
http://arxiv.org/abs/2109.05817
End-to-end question answering using a differentiable knowledge graph is a promising technique that requires only weak supervision, produces interpretable results, and is fully differentiable. Previous implementations of this technique (Cohen et al., …
External link:
http://arxiv.org/abs/2109.05808
Relation Extraction (RE) from tables is the task of identifying relations between pairs of columns of a table. Generally, RE models for this task require labelled tables for training. These labelled tables can also be generated artificially from a Knowledge…
External link:
http://arxiv.org/abs/2108.10750
End-to-end neural data-to-text (D2T) generation has recently emerged as an alternative to pipeline-based architectures. However, it has faced challenges in generalizing to new domains and generating semantically consistent text. In this work, we present…
External link:
http://arxiv.org/abs/2004.06577
Author:
Sen, Priyanka, Saffari, Amir
While models have reached superhuman performance on popular question answering (QA) datasets such as SQuAD, they have yet to outperform humans on the task of question answering itself. In this paper, we investigate if models are learning reading comprehension…
External link:
http://arxiv.org/abs/2004.03490
In this work, we provide a new formulation for Graph Convolutional Neural Networks (GCNNs) for link prediction on graph data that addresses common challenges for biomedical knowledge graphs (KGs). We introduce a regularized attention mechanism to GCNNs…
External link:
http://arxiv.org/abs/1812.00279
Generating novel molecules with optimal properties is a crucial step in many industries such as drug discovery. Recently, deep generative models have shown a promising way of performing de-novo molecular design. Although graph generative models are…
External link:
http://arxiv.org/abs/1811.09766