Showing 1 - 10 of 34 for the search: '"Peng, Xutan"'
Run-Length Encoding (RLE) is one of the most fundamental tools in data compression. However, its compression power drops significantly if the sequence lacks runs of consecutive identical elements. In extreme cases, the output of the encoder may require more s…
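The failure mode this abstract describes can be seen in a minimal RLE sketch (illustrative only, not code from the paper; the function name is ours):

```python
def rle_encode(seq):
    """Run-Length Encoding: collapse each run of identical
    elements into a (value, count) pair."""
    runs = []
    for x in seq:
        if runs and runs[-1][0] == x:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([x, 1])       # start a new run
    return [(v, c) for v, c in runs]

# Repetitive input compresses well:
print(rle_encode("aaaabbb"))   # → [('a', 4), ('b', 3)]

# With no consecutive repeats, every symbol becomes its own
# (value, 1) pair, so the encoded form is LONGER than the input:
print(rle_encode("abcdefg"))   # 7 pairs for 7 symbols
```

The second call shows the degenerate case: storing a count of 1 per symbol can double the size of incompressible input, which is the weakness the paper addresses.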
External link:
http://arxiv.org/abs/2312.17024
Authors:
Li, Chen, Peng, Xutan, Wang, Teng, Ge, Yixiao, Liu, Mengyang, Xu, Xuyuan, Wang, Yexin, Shan, Ying
Art forms such as movies and television (TV) dramas are reflections of the real world and have recently attracted much attention from the multimodal learning community. However, existing corpora in this domain share three limitations: (1) annotate…
External link:
http://arxiv.org/abs/2306.14644
Multi-Modal Relation Extraction (MMRE) aims to identify the relation between two entities in texts that contain visual clues. Rich visual content is valuable for the MMRE task, but existing works cannot adequately model the finer associations among differen…
External link:
http://arxiv.org/abs/2306.11020
Although it has been demonstrated that Natural Language Processing (NLP) algorithms are vulnerable to deliberate attacks, the question of whether such weaknesses can lead to software security threats is under-explored. To bridge this gap, we conducte…
External link:
http://arxiv.org/abs/2211.15363
Authors:
Li, Qian, Li, Jianxin, Wu, Jia, Peng, Xutan, Ji, Cheng, Peng, Hao, Wang, Lihong, Yu, Philip S.
Published in:
Neural Networks, Vol. 179, November 2024
Event Extraction bridges the gap between text and event signals. Based on the assumption of trigger-argument dependency, existing approaches have achieved state-of-the-art performance with expert-designed templates or complicated decoding constraints…
External link:
http://arxiv.org/abs/2110.04525
In this paper, we provide the first focused study on the discontinuities (a.k.a. holes) in the latent space of Variational Auto-Encoders (VAEs), a phenomenon which has been shown to have a detrimental effect on model capacity. When investigating latent…
External link:
http://arxiv.org/abs/2110.03318
Published in:
NAACL-HLT 2021
Cross-Lingual Word Embeddings (CLWEs) encode words from two or more languages in a shared high-dimensional space in which vectors representing words with similar meaning (regardless of language) are closely located. Existing methods for building high…
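The core property described here, translations lying close together in a shared space, amounts to nearest-neighbour retrieval under cosine similarity. A minimal sketch with made-up toy vectors (not the paper's embeddings or method):

```python
import math

# Toy shared embedding space: in a CLWE space, translations of the same
# concept (here English "dog" / Spanish "perro") should be near each other
# regardless of language. All vectors below are invented for illustration.
emb = {
    "dog":   (0.90, 0.10, 0.00),
    "perro": (0.88, 0.12, 0.02),
    "car":   (0.00, 0.20, 0.95),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest(word, vocab):
    """Translation retrieval = nearest neighbour in the shared space."""
    return max((w for w in vocab if w != word),
               key=lambda w: cosine(emb[word], emb[w]))

print(nearest("dog", emb))  # → perro
```

Retrieval quality in real CLWE systems hinges on how well the two monolingual spaces are aligned, which is the problem the paper targets.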
External link:
http://arxiv.org/abs/2104.04916
Published in:
NAACL-HLT 2021
Knowledge Graph Embeddings (KGEs) have been intensively explored in recent years due to their promise for a wide range of applications. However, existing studies focus on improving the final model performance without acknowledging the computational c…
External link:
http://arxiv.org/abs/2104.04676
Published in:
EACL 2021
We introduce the task of historical text summarisation, where documents in historical forms of a language are summarised in the corresponding modern language. This is a fundamentally important routine for historians and digital humanities researchers…
External link:
http://arxiv.org/abs/2101.10759