Showing 1 - 10 of 87
for search: '"Adel, Heike"'
To ensure large language models contain up-to-date knowledge, they need to be updated regularly. However, model editing is challenging as it might also affect knowledge that is unrelated to the new data. State-of-the-art methods identify parameters a…
External link:
http://arxiv.org/abs/2410.02433
Advances in information extraction have enabled the automatic construction of large knowledge graphs (e.g., Yago, Wikidata or Google KG), which are widely used in many applications like semantic search or data analytics. However, due to their semi-au…
External link:
http://arxiv.org/abs/2409.07869
In real-world environments, continual learning is essential for machine learning models, as they need to acquire new knowledge incrementally without forgetting what they have already learned. While pretrained language models have shown impressive cap…
External link:
http://arxiv.org/abs/2406.18708
Table Question Answering (TQA) aims at composing an answer to a question based on tabular data. While prior research has shown that TQA models lack robustness, understanding the underlying cause and nature of this issue remains predominantly unclear…
External link:
http://arxiv.org/abs/2404.18585
Continual learning aims at incrementally acquiring new knowledge while not forgetting existing knowledge. To overcome catastrophic forgetting, methods are either rehearsal-based, i.e., store data examples from previous tasks for data replay, or isola…
External link:
http://arxiv.org/abs/2404.00790
Explaining Pre-Trained Language Models with Attribution Scores: An Analysis in Low-Resource Settings
Attribution scores indicate the importance of different input parts and can, thus, explain model behaviour. Currently, prompt-based models are gaining popularity, i.a., due to their easier adaptability in low-resource settings. However, the quality o…
External link:
http://arxiv.org/abs/2403.05338
Most languages of the world pose low-resource challenges to natural language processing models. With multilingual training, knowledge can be shared among languages. However, not all languages positively influence each other and it is an open research…
External link:
http://arxiv.org/abs/2310.15269
Word-level saliency explanations ("heat maps over words") are often used to communicate feature-attribution in text-based models. Recent studies found that superficial factors such as word length can distort human interpretation of the communicated s…
External link:
http://arxiv.org/abs/2305.02679
This paper describes our system developed for the SemEval-2023 Task 12 "Sentiment Analysis for Low-resource African Languages using Twitter Dataset". Sentiment analysis is one of the most widely studied applications in natural language processing. Ho…
External link:
http://arxiv.org/abs/2305.00090
SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domains
Prompting pre-trained language models leads to promising results across natural language processing tasks but is less effective when applied in low-resource domains, due to the domain gap between the pre-training data and the downstream task. In this…
External link:
http://arxiv.org/abs/2302.06868