Showing 1 - 10 of 158 results for the search: '"Hwee Tou Ng"'
Published in:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Published in:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Author:
Ruixi Lin, Hwee Tou Ng
Published in:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers).
Author:
Tapas Nayak, Hwee Tou Ng
Published in:
RANLP
Distantly supervised datasets for relation extraction mostly focus on sentence-level extraction, and they cover very few relations. In this work, we propose cross-document relation extraction, where the two entities of a relation tuple appear in two different documents…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::2b4913857886d7d360da9b1b22601c00
http://arxiv.org/abs/2108.09505
Published in:
Findings of the Association for Computational Linguistics: EMNLP 2021.
Author:
Ruixi Lin, Hwee Tou Ng
Published in:
RANLP
In this paper, we propose a system combination method for grammatical error correction (GEC), based on nonlinear integer programming (IP). Our method optimizes a novel F score objective based on error types, and combines multiple end-to-end GEC systems…
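A minimal, purely illustrative sketch of the combination idea described in this abstract, using toy data and brute-force enumeration in place of the nonlinear integer-programming solver the paper refers to; all system names, error types, and edits below are hypothetical:

from itertools import product

# Hypothetical dev-set edits proposed by two component GEC systems, keyed by
# error type; each edit is (sentence_id, span, correction).
system_edits = {
    "sys_A": {"SPELL": {(0, (3, 4), "their")}, "VERB": {(1, (0, 1), "goes")}},
    "sys_B": {"SPELL": {(0, (3, 4), "there")}, "VERB": {(1, (0, 1), "goes")}},
}
reference = {(0, (3, 4), "their"), (1, (0, 1), "goes")}  # gold edits
error_types = ["SPELL", "VERB"]
systems = list(system_edits)

def f_beta(predicted, gold, beta=0.5):
    """F_beta of a set of predicted edits against gold edits (F0.5 by default)."""
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    p, r = tp / len(predicted), tp / len(gold)
    return (1 + beta**2) * p * r / (beta**2 * p + r)

best_score, best_assignment = -1.0, None
# Enumerate every assignment of error type -> source system; the paper solves
# this selection with integer programming rather than enumeration.
for choice in product(systems, repeat=len(error_types)):
    combined = set()
    for etype, sys_name in zip(error_types, choice):
        combined |= system_edits[sys_name].get(etype, set())
    score = f_beta(combined, reference)
    if score > best_score:
        best_score, best_assignment = score, dict(zip(error_types, choice))

print(best_assignment, round(best_score, 3))

On this toy input the search keeps sys_A's spelling edits and either system's verb edits, reaching F0.5 = 1.0; with realistic data the per-type choice is exactly what the combinatorial objective trades off.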
Published in:
COLING
Syntactic dependency parsing is an important task in natural language processing. Unsupervised dependency parsing aims to learn a dependency parser from sentences that have no annotation of their correct parse trees. Despite its difficulty, unsupervised…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::098d9c47a033123c6b0d5fa74c8857e4
http://arxiv.org/abs/2010.01535
Published in:
IJCAI
Recent research indicates that pretraining cross-lingual language models on large-scale unlabeled texts yields significant performance improvements over various cross-lingual and low-resource tasks. Through training on one hundred languages and terabytes…
Published in:
EMNLP (1)
Adapting pre-trained language models (PrLMs) (e.g., BERT) to new domains has gained much attention recently. Instead of fine-tuning PrLMs as done in most previous work, we investigate how to adapt the features of PrLMs to new domains without fine-tuning…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::44a7cddc298db23e514430d1b9635f4c
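For context on the feature-based setup mentioned in the last abstract, here is a generic sketch (not the paper's specific adaptation method) of extracting features from a frozen BERT encoder and training only a small classifier head; the model name, two-class task, and data below are placeholders:

import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # the PrLM stays frozen; none of its weights are fine-tuned
for p in encoder.parameters():
    p.requires_grad = False

classifier = nn.Linear(encoder.config.hidden_size, 2)  # hypothetical 2-class task
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

texts = ["toy in-domain sentence", "another toy example"]  # placeholder data
labels = torch.tensor([0, 1])

# Extract fixed [CLS] features from the frozen encoder, then update only the head.
batch = tokenizer(texts, padding=True, return_tensors="pt")
with torch.no_grad():
    features = encoder(**batch).last_hidden_state[:, 0]

loss = nn.functional.cross_entropy(classifier(features), labels)
loss.backward()
optimizer.step()
print(float(loss))

The point of the frozen-encoder setup is that only the lightweight head sees target-domain gradients; how the features themselves are adapted across domains is what the paper investigates.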