Showing 1 - 10 of 1 360 for search: '"Liu, Jingping"'
Previous works on negation understanding mainly focus on negation cue detection and scope resolution, without identifying the negation subject, which is also significant to downstream tasks. In this paper, we propose a new negation triplet extraction
External link:
http://arxiv.org/abs/2404.09830
Author:
Chen, Zhen, Liu, Jingping, Yang, Deqing, Xiao, Yanghua, Xu, Huimin, Wang, Zongyu, Xie, Rui, Xian, Yunsen
Open information extraction (OpenIE) aims to extract schema-free triplets in the form of (subject, predicate, object) from a given sentence. Compared with general information extraction (IE), OpenIE poses more challenges for
External link:
http://arxiv.org/abs/2401.11107
Author:
Zhu, Xiangru, Sun, Penglei, Wang, Chengyu, Liu, Jingping, Li, Zhixu, Xiao, Yanghua, Huang, Jun
Text-to-image (T2I) synthesis has recently achieved significant advancements. However, challenges remain in the model's compositionality, which is the ability to create new combinations from known components. We introduce Winoground-T2I, a benchmark
External link:
http://arxiv.org/abs/2312.02338
Author:
Zhu, Tinghui, Liu, Jingping, Liang, Jiaqing, Jiang, Haiyun, Xiao, Yanghua, Wang, Zongyu, Xie, Rui, Xian, Yunsen
Taxonomy expansion task is essential in organizing the ever-increasing volume of new concepts into existing taxonomies. Most existing methods focus exclusively on using textual semantics, leading to an inability to generalize to unseen terms and the
External link:
http://arxiv.org/abs/2309.06105
Author:
Yang, Jingsong, Han, Guanzhou, Yang, Deqing, Liu, Jingping, Xiao, Yanghua, Xu, Xiang, Wu, Baohua, Ni, Shenghua
POI tagging aims to annotate a point of interest (POI) with some informative tags, which facilitates many services related to POIs, including search, recommendation, and so on. Most of the existing solutions neglect the significance of POI images and
External link:
http://arxiv.org/abs/2306.10079
In light of the success of pre-trained language models (PLMs), continual pre-training of generic PLMs has become the paradigm of domain adaptation. In this paper, we propose QUERT, A Continual Pre-trained Language Model for QUERy Understanding in Tra
External link:
http://arxiv.org/abs/2306.06707
Author:
Gu, Zhouhong, Zhu, Xiaoxuan, Ye, Haoning, Zhang, Lin, Wang, Jianchen, Zhu, Yixin, Jiang, Sihang, Xiong, Zhuozhi, Li, Zihan, Wu, Weijie, He, Qianyu, Xu, Rui, Huang, Wenhao, Liu, Jingping, Wang, Zili, Wang, Shusen, Zheng, Weiguo, Feng, Hongwei, Xiao, Yanghua
New Natural Language Processing (NLP) benchmarks are urgently needed to align with the rapid development of large language models (LLMs). We present Xiezhi, the most comprehensive evaluation suite designed to assess holistic domain knowledge. Xiezhi com
External link:
http://arxiv.org/abs/2306.05783
Author:
Gu, Zhouhong, Jiang, Sihang, Liu, Jingping, Xiao, Yanghua, Feng, Hongwei, Li, Zhixu, Liang, Jiaqing, Zhong, Jian
Taxonomy is formulated as a directed acyclic concept graph or tree that supports many downstream tasks. Many newly emerging concepts need to be added to an existing taxonomy. The traditional taxonomy expansion task aims only at finding the best position
External link:
http://arxiv.org/abs/2303.14480
Continual pretraining is a popular way of building a domain-specific pretrained language model from a general-domain language model. In spite of its high efficiency, continual pretraining suffers from catastrophic forgetting, which may harm the model
External link:
http://arxiv.org/abs/2211.11363
Author:
Liu, Jingping, Song, Yuqiu, Xue, Kui, Sun, Hongli, Wang, Chao, Chen, Lihan, Jiang, Haiyun, Liang, Jiaqing, Ruan, Tong
Prompt tuning is an emerging way of adapting pre-trained language models to downstream tasks. However, existing studies mainly add prompts to the input sequence. This way would not work as expected due to the intermediate multi-head self-a
External link:
http://arxiv.org/abs/2206.15312