Showing 1 - 10 of 49 results for the search: '"Jang, Myeongjun"'
In this work, we introduce and analyze an approach to knowledge transfer from one collection of facts to another without the need for entity or relation matching. The method works for both canonicalized knowledge bases and uncanonicalized or open knowledge bases. …
External link:
http://arxiv.org/abs/2401.15439
Published in:
The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023)
The non-humanlike behaviour of contemporary pre-trained language models (PLMs) is a leading cause undermining their trustworthiness. A striking phenomenon of such faulty behaviours is the generation of inconsistent predictions, which produces logical …
External link:
http://arxiv.org/abs/2310.15541
Author:
Jang, Myeongjun, Majumder, Bodhisattwa Prasad, McAuley, Julian, Lukasiewicz, Thomas, Camburu, Oana-Maria
Published in:
The 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023)
While recent works have considerably improved the quality of the natural language explanations (NLEs) generated by a model to justify its predictions, there is very limited research on detecting and alleviating inconsistencies among generated NLEs. …
External link:
http://arxiv.org/abs/2306.02980
Published in:
The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023)
ChatGPT has gained huge popularity since its introduction. Its positive aspects have been reported through many media platforms, and some analyses even showed that ChatGPT achieved a decent grade in professional exams, adding extra support to the claim …
External link:
http://arxiv.org/abs/2303.06273
The logical negation property (LNP), which implies generating different predictions for semantically opposite inputs, is an important property that a trustworthy language model must satisfy. However, much recent evidence shows that large pre-trained language models …
External link:
http://arxiv.org/abs/2205.03815
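The LNP described in the entry above can be made concrete with a rough, hypothetical check (not the evaluation protocol of the paper): compare a model's predictions on a sentence and on its negation, and flag pairs where the prediction does not change. The toy_predict stub and the example pairs below are invented purely for illustration.

from typing import Callable, List, Tuple

def lnp_violations(predict: Callable[[str], str],
                   pairs: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    # A pair (sentence, negated sentence) is a candidate LNP violation
    # when the model returns the SAME prediction for both.
    return [(s, n) for s, n in pairs if predict(s) == predict(n)]

# Hypothetical stand-in predictor; a real check would call an actual PLM.
def toy_predict(sentence: str) -> str:
    return "negative" if "not" in sentence.lower().split() else "positive"

pairs = [
    ("The movie was good.", "The movie was not good."),
    ("The device always works.", "The device never works."),
]
print(lnp_violations(toy_predict, pairs))
# -> [('The device always works.', 'The device never works.')]
# The toy model misses negation that is expressed without the word "not".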
A well-formulated benchmark plays a critical role in spurring advancements in the natural language processing (NLP) field, as it allows objective and precise evaluation of diverse models. As modern language models (LMs) have become more elaborate and …
External link:
http://arxiv.org/abs/2204.04541
Author:
Jang, Myeongjun, Lukasiewicz, Thomas
The recent development of pretrained language models trained in a self-supervised fashion, such as BERT, is driving rapid progress in the field of NLP. However, their brilliant performance is based on leveraging syntactic artifacts of the training data. …
External link:
http://arxiv.org/abs/2110.02054
Author:
Jang, Myeongjun, Lukasiewicz, Thomas
Natural language free-text explanation generation is an efficient approach to training explainable language processing models for commonsense-knowledge-requiring tasks. The predominant form of these models is the explain-then-predict (EtP) structure …
External link:
http://arxiv.org/abs/2110.02056
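As a rough sketch of the explain-then-predict (EtP) structure mentioned in the entry above (the two model stubs are hypothetical, not the paper's models): an explainer first produces a free-text explanation, and the predictor then conditions on both the input and that explanation.

from typing import Callable, Tuple

def explain_then_predict(x: str,
                         explainer: Callable[[str], str],
                         predictor: Callable[[str, str], str]) -> Tuple[str, str]:
    # Stage 1: generate a free-text explanation for the input.
    explanation = explainer(x)
    # Stage 2: predict the label conditioned on the input and its explanation.
    label = predictor(x, explanation)
    return label, explanation

# Hypothetical stand-ins, used only to make the example run.
toy_explainer = lambda x: f"The premise mentions {x.split()[0].lower()!r}, which suggests agreement."
toy_predictor = lambda x, e: "entailment" if "agreement" in e else "neutral"

print(explain_then_predict("Dogs are animals.", toy_explainer, toy_predictor))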
Consistency, which refers to the capability of generating the same predictions for semantically similar contexts, is a highly desirable property for a sound language understanding model. Although recent pretrained language models (PLMs) deliver outstanding …
External link:
http://arxiv.org/abs/2108.06665
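One minimal way to make the consistency notion in the entry above concrete (a sketch only, not the paper's measurement) is to score how often a model's prediction survives paraphrasing; toy_predict below is a hypothetical stand-in for an actual classifier.

from typing import Callable, List, Tuple

def consistency_rate(predict: Callable[[str], str],
                     paraphrase_pairs: List[Tuple[str, str]]) -> float:
    # Fraction of semantically equivalent pairs that receive the same label.
    agree = sum(predict(a) == predict(b) for a, b in paraphrase_pairs)
    return agree / len(paraphrase_pairs)

# Hypothetical keyword-based classifier, used only to make the example run.
def toy_predict(sentence: str) -> str:
    return "positive" if "great" in sentence.lower() else "negative"

pairs = [
    ("The film was great.", "The movie was fantastic."),
    ("The plot was dull.", "The storyline was boring."),
]
print(consistency_rate(toy_predict, pairs))  # 0.5: the first pair disagrees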
Author:
Jang, Myeongjun, Kang, Pilsung
Sentence embedding is a significant research topic in the field of natural language processing (NLP). Generating sentence embedding vectors that reflect the intrinsic meaning of a sentence is a key factor in achieving enhanced performance on various NLP tasks. …
External link:
http://arxiv.org/abs/1901.05219
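For readers unfamiliar with the term, a common baseline form of sentence embedding (not the method proposed in the entry above) is to mean-pool word vectors and compare sentences by cosine similarity; the tiny random vocabulary below is purely illustrative.

import numpy as np

# Hypothetical toy vocabulary of word vectors; a real system would use
# learned embeddings from a trained encoder.
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=16) for w in "the movie film was really good".split()}

def sentence_embedding(sentence: str) -> np.ndarray:
    # Mean-pool the vectors of known tokens (a simple baseline strategy).
    vectors = [vocab[t] for t in sentence.lower().split() if t in vocab]
    return np.mean(vectors, axis=0)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(sentence_embedding("The movie was good"),
             sentence_embedding("The film was really good")))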