Showing 1 - 10 of 1,361 for search: '"LIAN Zheng"'
Author:
Lian, Zheng, Sun, Haiyang, Sun, Licai, Chen, Lan, Chen, Haoyu, Gu, Hao, Wen, Zhuofan, Chen, Shun, Zhang, Siyuan, Yao, Hailiang, Xu, Mingyu, Chen, Kang, Liu, Bin, Liu, Rui, Liang, Shan, Li, Ya, Yi, Jiangyan, Tao, Jianhua
Multimodal Emotion Recognition (MER) is an important research topic. This paper advocates for a transformative paradigm in MER. The rationale behind our work is that current approaches often rely on a limited set of basic emotion labels, which do not…
External link:
http://arxiv.org/abs/2410.01495
In the era of large language models (LLMs), the "System I" tasks (the fast, unconscious, and intuitive ones, e.g., sentiment analysis and text classification) have been argued to be successfully solved. However, sarcasm, as a subtle linguist…
External link:
http://arxiv.org/abs/2408.11319
Explainable Multimodal Emotion Recognition (EMER) is an emerging task that aims to achieve reliable and accurate emotion recognition. However, due to the high annotation cost, the existing dataset (denoted as EMER-Fine) is small, making it difficult…
External link:
http://arxiv.org/abs/2407.07653
Emotion and Intent Joint Understanding in Multimodal Conversation (MC-EIU) aims to decode the semantic information manifested in a multimodal conversational history, while inferring the emotions and intents simultaneously for the current utterance. M…
External link:
http://arxiv.org/abs/2407.02751
Author:
Cheng, Zebang, Cheng, Zhi-Qi, He, Jun-Yan, Sun, Jingdong, Wang, Kai, Lin, Yuxiang, Lian, Zheng, Peng, Xiaojiang, Hauptmann, Alexander
Accurate emotion perception is crucial for various applications, including human-computer interaction, education, and counseling. However, traditional single-modality approaches often fail to capture the complexity of real-world emotional expressions…
External link:
http://arxiv.org/abs/2406.11161
Author:
Lian, Zheng, Sun, Haiyang, Sun, Licai, Wen, Zhuofan, Zhang, Siyuan, Chen, Shun, Gu, Hao, Zhao, Jinming, Ma, Ziyang, Chen, Xie, Yi, Jiangyan, Liu, Rui, Xu, Kele, Liu, Bin, Cambria, Erik, Zhao, Guoying, Schuller, Björn W., Tao, Jianhua
Multimodal emotion recognition is an important research topic in artificial intelligence. Over the past few decades, researchers have made remarkable progress by increasing the dataset size and building more effective algorithms. However, due to prob…
External link:
http://arxiv.org/abs/2404.17113
Author:
Wen, Zhuofan, Zhang, Fengyu, Zhang, Siyuan, Sun, Haiyang, Xu, Mingyu, Sun, Licai, Lian, Zheng, Liu, Bin, Tao, Jianhua
Multimodal fusion is a significant method for most multimodal tasks. With the recent surge in the number of large pre-trained models, combining both multimodal fusion methods and pre-trained model features can achieve outstanding performance in many…
External link:
http://arxiv.org/abs/2403.15044
Deception detection has attracted increasing attention due to its importance in real-world scenarios. Its main goal is to detect deceptive behaviors from multimodal clues such as gestures, facial expressions, prosody, etc. However, these bases are us…
External link:
http://arxiv.org/abs/2402.11432
Published in:
Information Fusion, 2024
Audio-Visual Emotion Recognition (AVER) has garnered increasing attention in recent years for its critical role in creating emotion-aware intelligent machines. Previous efforts in this area are dominated by the supervised learning paradigm. Despite si…
External link:
http://arxiv.org/abs/2401.05698
Multimodal emotion recognition plays a crucial role in enhancing user experience in human-computer interaction. Over the past few decades, researchers have proposed a series of algorithms and achieved impressive progress. Although each method shows i…
External link:
http://arxiv.org/abs/2401.03429