Showing 1 - 10 of 15 for search: '"He, Ruidan"'
Traditionally, a debate requires a manual preparation process, including reading plenty of articles, selecting the claims, identifying the stances of the claims, seeking the evidence for the claims, etc. As the AI debate attracts more attention…
External link:
http://arxiv.org/abs/2203.12257
Document-level Relation Extraction (DocRE) is a more challenging task than its sentence-level counterpart. It aims to extract relations from multiple sentences at once. In this paper, we propose a semi-supervised framework for DocRE with three…
External link:
http://arxiv.org/abs/2203.10900
Knowledge-enhanced language representation learning has shown promising results across various knowledge-intensive NLP tasks. However, prior methods are limited in their efficient utilization of multilingual knowledge graph (KG) data for language model (LM)…
External link:
http://arxiv.org/abs/2111.10962
Data augmentation is an effective solution to data scarcity in low-resource scenarios. However, when applied to token-level tasks such as NER, data augmentation methods often suffer from token-label misalignment, which leads to unsatisfactory performance…
External link:
http://arxiv.org/abs/2108.13655
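A minimal sketch of the token-label misalignment issue this abstract points to, assuming a simple substitution-based augmenter; the substitution table, the 30% replacement rate, and the BIO expansion rule are illustrative assumptions, not taken from the paper:

```python
# Hypothetical label-aware token replacement for NER augmentation.
# Replacing one token with a multi-token phrase breaks the one-to-one
# token/label alignment; expanding the BIO labels alongside keeps them in sync.
import random

def augment(tokens, labels, substitutions, rate=0.3):
    """Replace tokens with (possibly multi-token) substitutes and expand labels."""
    new_tokens, new_labels = [], []
    for tok, lab in zip(tokens, labels):
        subst = substitutions.get(tok)
        if subst and random.random() < rate:
            pieces = subst.split()
            new_tokens.extend(pieces)
            if lab.startswith("B-"):
                # first piece keeps B-, the rest become I- of the same entity type
                new_labels.append(lab)
                new_labels.extend(["I-" + lab[2:]] * (len(pieces) - 1))
            else:
                new_labels.extend([lab] * len(pieces))
        else:
            new_tokens.append(tok)
            new_labels.append(lab)
    return new_tokens, new_labels

tokens = ["John", "works", "at", "Acme"]
labels = ["B-PER", "O", "O", "B-ORG"]
print(augment(tokens, labels, {"Acme": "Acme Corp"}, rate=1.0))
```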
Author:
He, Ruidan; Liu, Linlin; Ye, Hai; Tan, Qingyu; Ding, Bosheng; Cheng, Liying; Low, Jia-Wei; Bing, Lidong; Si, Luo
Adapter-based tuning has recently arisen as an alternative to fine-tuning. It works by adding light-weight adapter modules to a pretrained language model (PrLM) and only updating the parameters of the adapter modules when learning on a downstream task. …
External link:
http://arxiv.org/abs/2106.03164
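A minimal sketch of the setup described above, assuming a standard bottleneck adapter design: the pretrained weights are frozen and only the small adapter layers receive gradient updates. The layer sizes and the toy encoder are placeholders, not the paper's architecture:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x):
        # residual connection around a small bottleneck MLP
        return x + self.up(torch.relu(self.down(x)))

class AdapterTunedEncoder(nn.Module):
    def __init__(self, pretrained_encoder, hidden_size):
        super().__init__()
        self.encoder = pretrained_encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # freeze the PrLM
        self.adapter = Adapter(hidden_size)  # only these weights are updated

    def forward(self, x):
        return self.adapter(self.encoder(x))

# Toy stand-in for a pretrained LM layer; in practice this would be BERT etc.
encoder = nn.Linear(768, 768)
model = AdapterTunedEncoder(encoder, hidden_size=768)
out = model(torch.randn(4, 768))
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```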
Published in:
IJCAI-PRICAI2020
Recent research indicates that pretraining cross-lingual language models on large-scale unlabeled texts yields significant performance improvements over various cross-lingual and low-resource tasks. Through training on one hundred languages and terabytes…
External link:
http://arxiv.org/abs/2011.11499
BERT is inefficient for sentence-pair tasks such as clustering or semantic search, as it needs to evaluate combinatorially many sentence pairs, which is very time-consuming. Sentence BERT (SBERT) attempted to solve this challenge by learning semantically…
External link:
http://arxiv.org/abs/2009.12061
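An illustrative sketch of the embedding-based search that SBERT-style models enable: each sentence is encoded once and comparison reduces to cosine similarity, avoiding a full BERT forward pass per sentence pair. The sentence-transformers library and the model name are assumptions for demonstration, not taken from the paper:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf model
corpus = [
    "Adapters freeze the backbone and train only small modules.",
    "BERT is slow for pairwise sentence comparison.",
]
query = "Why is pairwise sentence comparison expensive?"

# Encode every sentence once; search is a single matrix of cosine scores.
corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, corpus_emb)
print(scores)
```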
Adapting pre-trained language models (PrLMs) (e.g., BERT) to new domains has gained much attention recently. Instead of fine-tuning PrLMs as done in most previous work, we investigate how to adapt the features of PrLMs to new domains without fine-tuning…
External link:
http://arxiv.org/abs/2009.11538
Aspect-based sentiment analysis produces a list of aspect terms and their corresponding sentiments for a natural language sentence. This task is usually done in a pipeline manner, with aspect term extraction performed first, followed by sentiment prediction…
External link:
http://arxiv.org/abs/1906.06906
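A toy sketch of the pipeline setup the abstract describes, with hypothetical stand-ins for the two stages; in practice the extractor would be a trained sequence-labeling model and the classifier a model conditioned on the sentence and the aspect:

```python
def extract_aspects(sentence):
    # toy stand-in for an aspect term extraction model (e.g., BIO tagging)
    lexicon = {"battery", "screen", "price"}
    return [tok for tok in sentence.lower().split() if tok in lexicon]

def classify_sentiment(sentence, aspect):
    # toy stand-in for a per-aspect sentiment classifier
    return "positive" if "great" in sentence.lower() else "negative"

sentence = "The battery is great but the screen scratches easily"
pairs = [(a, classify_sentiment(sentence, a)) for a in extract_aspects(sentence)]
print(pairs)  # toy output: [('battery', 'positive'), ('screen', 'positive')]
```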
We consider the cross-domain sentiment classification problem, where a sentiment classifier is to be learned from a source domain and generalized to a target domain. Our approach explicitly minimizes the distance between the source and the target…
External link:
http://arxiv.org/abs/1809.00530
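A minimal sketch of the general idea of minimizing the source-target distance during training: a sentiment loss on labeled source data plus a penalty on the discrepancy between source and target feature distributions. The mean-feature discrepancy, the 0.1 weight, and the toy encoder are illustrative assumptions, not the paper's exact objective:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU())
classifier = nn.Linear(128, 2)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3
)

def domain_distance(a, b):
    # squared distance between the mean feature vectors of the two domains
    return (a.mean(dim=0) - b.mean(dim=0)).pow(2).sum()

src_x, src_y = torch.randn(32, 300), torch.randint(0, 2, (32,))  # labeled source
tgt_x = torch.randn(32, 300)                                     # unlabeled target

src_h, tgt_h = encoder(src_x), encoder(tgt_x)
loss = nn.functional.cross_entropy(classifier(src_h), src_y) \
       + 0.1 * domain_distance(src_h, tgt_h)
loss.backward()
optimizer.step()
```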