Showing 1 - 10 of 53 results for the query: '"Ulges, Adrian"'
Current publicly available knowledge work data collections lack diversity, extensive annotations, and contextual information about the users and their documents. These issues hinder objective and comparable data-driven evaluations and optimizations …
External link:
http://arxiv.org/abs/2409.04286
Authors:
Lamott, Marcel, Weweler, Yves-Noel, Ulges, Adrian, Shafait, Faisal, Krechel, Dirk, Obradovic, Darko
Recent advances in training large language models (LLMs) using massive amounts of solely textual data lead to strong generalization across many domains and tasks, including document-specific tasks. Opposed to that, there is a trend to train multi-modal …
External link:
http://arxiv.org/abs/2402.09841
We address the challenge of building domain-specific knowledge models for industrial use cases, where labelled data and taxonomic information are initially scarce. Our focus is on inductive link prediction models as a basis for practical tools that …
External link:
http://arxiv.org/abs/2301.00716
We address contextualized code retrieval, the search for code snippets helpful to fill gaps in a partial input program. Our approach facilitates a large-scale self-supervised contrastive training by splitting source code randomly into contexts and targets …
External link:
http://arxiv.org/abs/2204.11594
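The self-supervised setup sketched in the abstract above (randomly splitting source code into contexts and targets) can be illustrated roughly as follows. The function name and the `<GAP>` marker are illustrative assumptions, not the paper's actual interface:

```python
import random

def make_context_target(source: str, rng: random.Random):
    """Split code at random line boundaries into a (context, target) pair.

    The target is a contiguous snippet cut out of the program; the context
    is the remainder with a gap marker, simulating a partial input program.
    The <GAP> marker and this function name are illustrative only.
    """
    lines = source.splitlines(keepends=True)
    start = rng.randrange(len(lines))
    end = rng.randrange(start, len(lines)) + 1  # cut out at least one line
    target = "".join(lines[start:end])
    context = "".join(lines[:start]) + "<GAP>\n" + "".join(lines[end:])
    return context, target
```

In a contrastive setup, each context would then be trained to embed close to its own target and far from targets of other programs in the batch.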
Authors:
Eberts, Markus, Ulges, Adrian
We present a joint model for entity-level relation extraction from documents. In contrast to other approaches - which focus on local intra-sentence mention pairs and thus require annotations on mention level - our model operates on entity level. …
External link:
http://arxiv.org/abs/2102.05980
Entity linking, the task of mapping textual mentions to known entities, has recently been tackled using contextualized neural networks. We address the question whether these results -- reported for large, high-quality datasets such as Wikipedia -- transfer …
External link:
http://arxiv.org/abs/2005.07604
Authors:
Eberts, Markus, Ulges, Adrian
We introduce SpERT, an attention model for span-based joint entity and relation extraction. Our key contribution is a light-weight reasoning on BERT embeddings, which features entity recognition and filtering, as well as relation classification with …
External link:
http://arxiv.org/abs/1909.07755
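The candidate generation step underlying span-based models such as SpERT can be shown in a few lines; the enumeration below is a standard construction, while the BERT-based scoring, filtering, and relation classification around it are omitted:

```python
def enumerate_spans(n_tokens: int, max_width: int):
    """All candidate token spans (start, end) with 1 <= end - start <= max_width.

    A span-based model scores each such span with a classifier over token
    embeddings, filters out non-entities, and then classifies pairs of the
    surviving spans for relations; only the enumeration is sketched here.
    """
    return [
        (start, end)
        for start in range(n_tokens)
        for end in range(start + 1, min(start + max_width, n_tokens) + 1)
    ]
```

Bounding the span width keeps the candidate set linear in sentence length rather than quadratic, which is what makes exhaustive span scoring tractable.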
In retrieval applications, binary hashes are known to offer significant improvements in terms of both memory and speed. We investigate the compression of sentence embeddings using a neural encoder-decoder architecture, which is trained by minimizing …
External link:
http://arxiv.org/abs/1908.05541
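The two basic operations behind such binary hashes, binarization and Hamming distance, look like this. Sign thresholding is a common binarization choice used here for illustration, not necessarily the paper's exact scheme, and the encoder-decoder training itself is omitted:

```python
import numpy as np

def binarize(embeddings: np.ndarray) -> np.ndarray:
    """Threshold real-valued embeddings into binary codes (one bit per dimension)."""
    return (embeddings > 0).astype(np.uint8)

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits; a cheap proxy for distance between embeddings."""
    return int(np.count_nonzero(a != b))
```

A 768-dimensional float32 embedding (3072 bytes) shrinks to 96 bytes this way, and Hamming distance reduces to XOR plus popcount, which is where the memory and speed gains come from.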
Published in:
AAAI-19, Vol. 33 (2019), pp. 3044-3051
We present a novel extension to embedding-based knowledge graph completion models which enables them to perform open-world link prediction, i.e. to predict facts for entities unseen in training based on their textual description. Our model combines …
External link:
http://arxiv.org/abs/1906.08382
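The core idea, scoring triples for an unseen entity by encoding its textual description into the graph embedding space, can be sketched as follows. The averaged-word-vector encoder and the DistMult scorer are deliberately simple stand-ins for illustration, not the paper's exact architecture:

```python
import numpy as np

def encode_description(words, word_emb):
    """Map an unseen entity's textual description into the entity embedding
    space by averaging word vectors (a deliberately simple stand-in encoder)."""
    return np.mean([word_emb[w] for w in words], axis=0)

def distmult_score(head: np.ndarray, relation: np.ndarray, tail: np.ndarray) -> float:
    """DistMult triple score: sum of element-wise products of the embeddings."""
    return float(np.sum(head * relation * tail))
```

Because the head embedding is computed from text rather than looked up in a trained table, the same scorer can rank candidate tails for entities that never appeared in the training graph.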
Authors:
Abid, Nosheen, Shahzad, Muhammad, Malik, Muhammad Imran, Schwanecke, Ulrich, Ulges, Adrian, Kovács, György, Shafait, Faisal
Published in:
International Journal of Applied Earth Observation and Geoinformation, Vol. 105 (25 December 2021)