Showing 1 - 5 of 5 for search: '"Kavumba, Pride"'
Finetuning large pre-trained language models with a task-specific head has advanced the state-of-the-art on many natural language understanding benchmarks. However, models with a task-specific head require a lot of training data, making them susceptible to…
External link:
http://arxiv.org/abs/2205.09295
We present Semi-Structured Explanations for COPA (COPA-SSE), a new crowdsourced dataset of 9,747 semi-structured, English common sense explanations for Choice of Plausible Alternatives (COPA) questions. The explanations are formatted as a set of triples…
External link:
http://arxiv.org/abs/2201.06777
Improving model generalization on held-out data is one of the core objectives in commonsense reasoning. Recent work has shown that models trained on a dataset with superficial cues tend to perform well on an easy test set containing those cues but poorly on a hard test set without them…
External link:
http://arxiv.org/abs/2104.11514
Authors:
Kavumba, Pride, Inoue, Naoya, Heinzerling, Benjamin, Singh, Keshav, Reisert, Paul, Inui, Kentaro
Pretrained language models, such as BERT and RoBERTa, have shown large improvements on the commonsense reasoning benchmark COPA. However, recent work found that many improvements in benchmarks of natural language understanding are not due to models learning the task…
External link:
http://arxiv.org/abs/1911.00225
Author:
KAVUMBA, Pride
Published in:
東北大学電通談話会記録. 89(2):18-19