Showing 1 - 5 of 5 for search: '"Min, Junghyun"'
Previous work in structured prediction (e.g. NER, information extraction) using a single model makes use of explicit dataset information, which helps boost in-distribution performance but is orthogonal to robust generalization in real-world situations.
External link:
http://arxiv.org/abs/2402.08971
Unsupervised learning objectives like language modeling and de-noising constitute a significant part in producing pre-trained models that perform various downstream applications from natural language understanding to conversational tasks. However, de…
External link:
http://arxiv.org/abs/2402.08382
Pretrained neural models such as BERT, when fine-tuned to perform natural language inference (NLI), often show high accuracy on standard datasets, but display a surprising lack of sensitivity to word order on controlled challenge sets. We hypothesize…
External link:
http://arxiv.org/abs/2004.11999
If the same neural network architecture is trained multiple times on the same dataset, will it make similar linguistic generalizations across runs? To study this question, we fine-tuned 100 instances of BERT on the Multi-genre Natural Language Inference…
External link:
http://arxiv.org/abs/1911.02969
Author:
Kim EJ; Associate Professor, Department of Nursing, Gangneung-Wonju National University, Wonju, Korea., Lim JY; Professor, Department of Nursing, Inha University, Incheon, Korea., Kim GM; Associate Professor, Department of Nursing, Gangneung-Wonju National University, Wonju, Korea., Min J; Nursing Team Leader, Bundang Jesaeng Hospital, Seongnam, Korea.
Published in:
Child health nursing research [Child Health Nurs Res] 2021 Apr; Vol. 27 (2), pp. 137-145. Date of Electronic Publication: 2021 Apr 30.