Showing 1 - 10 of 23 for the search: '"Zhu, Junnan"'
Author:
Zhou, Weixiao, Li, Gengyao, Cheng, Xianfu, Liang, Xinnian, Zhu, Junnan, Zhai, Feifei, Li, Zhoujun
Dialogue summarization involves a wide range of scenarios and domains. However, existing methods generally only apply to specific scenarios or domains. In this study, we propose a new pre-trained model specifically designed for multi-scenario multi-domain dialogue summarization…
External link:
http://arxiv.org/abs/2310.10285
Multimodal summarization usually suffers from the problem that the contribution of the visual modality is unclear. Existing multimodal summarization approaches focus on designing fusion methods for the different modalities, while ignoring the adaptive…
External link:
http://arxiv.org/abs/2307.02716
A common scenario of Multilingual Neural Machine Translation (MNMT) is that each translation task arrives in a sequential manner, and the training data of previous tasks is unavailable. In this scenario, the current methods suffer heavily from catastrophic forgetting… (a toy sketch of this sequential setting follows below)
External link:
http://arxiv.org/abs/2212.02800
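To make the sequential setting described in the entry above concrete, here is a toy sketch of my own (an illustration of the scenario, not the paper's method): translation tasks arrive one by one, only the current task's data is visible during training, and a plain L2 anchor to the previous task's weights stands in for the many possible forgetting mitigations. The `nn.Linear` model and random batches are placeholders for a real NMT model and corpora.

```python
# Toy sketch of continual multilingual NMT: tasks arrive sequentially and
# earlier training data cannot be revisited. All names here are placeholders.
import copy
import torch
from torch import nn

model = nn.Linear(16, 16)                       # stand-in for a full NMT model
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def make_task_batches(seed, n_batches=20):
    g = torch.Generator().manual_seed(seed)
    return [(torch.randn(8, 16, generator=g), torch.randn(8, 16, generator=g))
            for _ in range(n_batches)]

tasks = [make_task_batches(s) for s in range(3)]  # e.g. en-de, then en-fr, then en-zh

anchor = None                                   # snapshot taken after the previous task
for task_id, batches in enumerate(tasks):
    for x, y in batches:                        # only the CURRENT task's data is seen
        loss = nn.functional.mse_loss(model(x), y)
        if anchor is not None:                  # naive mitigation: stay close to old weights
            loss = loss + 1e-2 * sum((p - q.detach()).pow(2).sum()
                                     for p, q in zip(model.parameters(),
                                                     anchor.parameters()))
        opt.zero_grad()
        loss.backward()
        opt.step()
    anchor = copy.deepcopy(model)               # previous data is gone; only weights remain
    print(f"finished task {task_id}")
```

Without the anchor term, later tasks freely overwrite whatever the model learned for earlier language pairs, which is the catastrophic forgetting the abstract refers to.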
Role-oriented dialogue summarization aims to generate summaries for different roles in the dialogue, e.g., merchants and consumers. Existing methods handle this task by summarizing each role's content separately and thus are prone to ignore the information…
External link:
http://arxiv.org/abs/2205.13190
Dialogue summarization has drawn much attention recently. Especially in the customer service domain, agents can use dialogue summaries to boost their work by quickly grasping the customer's issues and the service progress. These applications require summaries…
External link:
http://arxiv.org/abs/2108.13139
Author:
Zhou, Aojun, Ma, Yukun, Zhu, Junnan, Liu, Jianbo, Zhang, Zhijie, Yuan, Kun, Sun, Wenxiu, Li, Hongsheng
Sparsity in Deep Neural Networks (DNNs) has been widely studied to compress and accelerate models in resource-constrained environments. It can be generally categorized into unstructured fine-grained sparsity, which zeroes out multiple individual weights… (a sketch contrasting unstructured and structured sparsity follows below)
External link:
http://arxiv.org/abs/2102.04010
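As a concrete illustration of the sparsity categories mentioned in the entry above, the snippet below is an assumed sketch (not code from the paper): it contrasts unstructured magnitude pruning, which zeroes individual weights anywhere in a matrix, with an N:M fine-grained structured pattern of the kind this line of work studies, e.g. keeping at most 2 non-zero weights in every group of 4 consecutive weights.

```python
# Contrast of unstructured magnitude pruning vs. N:M structured sparsity.
import numpy as np

def unstructured_prune(w, sparsity=0.5):
    """Zero out roughly the smallest `sparsity` fraction of weights, anywhere."""
    flat = np.abs(w).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k]        # k-th smallest magnitude
    return np.where(np.abs(w) < threshold, 0.0, w)

def nm_prune(w, n=2, m=4):
    """Keep the n largest-magnitude weights in every group of m consecutive
    weights along the last axis (N:M fine-grained structured sparsity)."""
    assert w.shape[-1] % m == 0, "last dimension must be divisible by m"
    groups = w.reshape(-1, m)
    drop_idx = np.argsort(np.abs(groups), axis=1)[:, : m - n]   # smallest m-n per group
    mask = np.ones_like(groups)
    np.put_along_axis(mask, drop_idx, 0.0, axis=1)
    return (groups * mask).reshape(w.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 8))
    print("unstructured 50% sparse:\n", unstructured_prune(w))
    print("2:4 structured sparse:\n", nm_prune(w, n=2, m=4))
```

Both produce about 50% zeros here, but the N:M variant distributes them in a regular pattern that sparse hardware can exploit, which is the usual motivation for structured over unstructured sparsity.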
End-to-end speech translation aims to translate speech in one language into text in another language in an end-to-end way. Most existing methods employ an encoder-decoder structure with a single encoder to learn acoustic representation and semantic information…
External link:
http://arxiv.org/abs/2010.14920
Author:
Zhu, Junnan, Wang, Qian, Wang, Yining, Zhou, Yu, Zhang, Jiajun, Wang, Shaonan, Zong, Chengqing
Cross-lingual summarization (CLS) is the task of producing a summary in one particular language for a source document in a different language. Existing methods simply divide this task into two steps: summarization and translation, leading to the problem of error propagation… (a sketch of this two-step pipeline follows below)
External link:
http://arxiv.org/abs/1909.00156
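The two-step baseline criticized in the entry above can be sketched in a few lines. The pipeline below is a hypothetical illustration, not the paper's proposed model; the Hugging Face model names are assumptions, and any mistake the summarizer makes is handed to the translator unchanged, which is exactly the error-propagation problem the snippet points at.

```python
# Hypothetical two-step cross-lingual summarization: summarize, then translate.
from transformers import pipeline

# Model names are illustrative assumptions, not the models used in the paper.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
translator = pipeline("translation_en_to_de", model="t5-small")

def two_step_cls(document_en: str) -> str:
    """Summarize an English document, then translate the summary into German.
    Errors from step 1 propagate uncorrected into step 2."""
    summary_en = summarizer(document_en, max_length=60, min_length=15)[0]["summary_text"]
    return translator(summary_en, max_length=80)[0]["translation_text"]

if __name__ == "__main__":
    doc = ("Cross-lingual summarization produces a summary in one language "
           "for a source document written in another language.")
    print(two_step_cls(doc))
```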
Published in:
In Knowledge-Based Systems, 10 January 2023, Vol. 259
Academic article
This result cannot be displayed to users who are not logged in.
You must log in to view the result.