Showing 1 - 8 of 8 for search: '"Phu Mon Htut"'
Published in:
Proceedings of the Third Workshop on Insights from Negative Results in NLP.
Author:
Phu Mon Htut, William C. Huang, Samuel R. Bowman, Haokun Liu, Jason Phang, Clara Vania, Richard Yuanzhe Pang, Kyunghyun Cho, Dhara A. Mungra
Published in:
ACL/IJCNLP (1)
Recent years have seen numerous NLP datasets introduced to evaluate the performance of fine-tuned models on natural language understanding tasks. Recent results from large pretrained models, though, show that many of these datasets are largely saturated…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::a6312188ae4bcb9814b2733c9e88f024
http://arxiv.org/abs/2106.00840
Author:
Yada Pruksachatkun, Ian Tenney, Haokun Liu, Philip Yeres, Samuel R. Bowman, Jason Phang, Alex Wang, Phu Mon Htut
Published in:
ACL (demo)
We introduce jiant, an open source toolkit for conducting multitask and transfer learning experiments on English NLU tasks. jiant enables modular and configuration-driven experimentation with state-of-the-art models and implements a broad set of tasks… (a minimal usage sketch follows this entry)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::a142207d49aa77bc9befc3b26a7ca2ef
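As a rough illustration of the "configuration-driven experimentation" the abstract describes, here is a minimal sketch of fine-tuning a model on one task with jiant's Simple API, adapted from the project README (https://github.com/nyu-mll/jiant). Parameter names such as hf_pretrained_model_name_or_path follow jiant 2.x and may differ in other versions; the paths and the choice of MRPC as the task are placeholders, not from the paper.

```python
# Sketch of a jiant Simple API run (jiant 2.x); paths are placeholders.
import jiant.scripts.download_data.runscript as downloader
from jiant.proj.simple import runscript as run

# Download data for a target task (here MRPC) into a local directory.
downloader.download_data(["mrpc"], "/tmp/jiant_data")

# Configuration-driven setup: choose a pretrained model and a task list.
args = run.RunConfiguration(
    run_name="mrpc_demo",
    exp_dir="/tmp/jiant_exp",
    data_dir="/tmp/jiant_data",
    hf_pretrained_model_name_or_path="roberta-base",
    tasks="mrpc",
    train_batch_size=16,
    num_train_epochs=3,
)

# Train and evaluate end to end.
run.run_simple(args)
```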
Author:
Richard Yuanzhe Pang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Yada Pruksachatkun, Clara Vania, Katharina Kann, Samuel R. Bowman, Jason Phang
Published in:
ACL
While pretrained models such as BERT have shown large gains across natural language understanding tasks, their performance can be improved by further training the model on a data-rich intermediate task, before fine-tuning it on a target task. However, …
Author:
Samuel R. Bowman, Shikha Bordia, Yining Nie, Hagen Blix, Yu Cao, Anhad Mohananey, Jason Phang, Ioana Grosu, Wei Peng, Alicia Parrish, Alex Warstadt, Sheng-Fu Wang, Haokun Liu, Paloma Jeretic, Phu Mon Htut, Anna Alsop
Published in:
EMNLP/IJCNLP (1)
Though state-of-the-art sentence representation models can perform tasks requiring significant knowledge of grammar, it is an open question how best to evaluate their grammatical knowledge. We explore five experimental methods inspired by prior work…
Author:
Phu Mon Htut, Joel Tetreault
Published in:
BEA@ACL
In recent years, sequence-to-sequence models have been very effective for end-to-end grammatical error correction (GEC). As creating a human-annotated parallel corpus for GEC is expensive and time-consuming, there has been work on artificial corpus generation…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::cfaf130e1c6b41fe80ee6e391c622db7
Published in:
BlackboxNLP@EMNLP
A substantial thread of recent work on latent tree learning has attempted to develop neural network models with parse-valued latent variables and train them on non-parsing tasks, in the hope of having them discover interpretable tree structure. In a…
Published in:
NAACL-HLT (Student Research Workshop)
In recent years, there have been amazing advances in deep learning methods for machine reading. In machine reading, the machine reader has to extract the answer from the given ground-truth paragraph. Recently, the state-of-the-art machine reading models…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::07866deb711778b96eb574f42aca2c78