Semi-Supervised Multimodal Deep Learning Model for Polarity Detection in Arguments
Author: | Dufresne Aude, Tato Ange, Frasson Claude, Nkambou Roger |
---|---|
Year: | 2018 |
Subject: | Artificial neural network; contextual image classification; polarity; computer science; deep learning; user modeling; sentiment analysis; feature extraction; convolutional neural network; feature learning; artificial intelligence; natural language processing |
Source: | IJCNN |
DOI: | 10.1109/ijcnn.2018.8489342 |
Abstract: | Deep learning has been successfully applied to many tasks such as image classification, feature learning, and text classification (sentiment analysis or opinion mining). However, little research has focused on extracting the polarity of sentiments expressed in text using a multimodal architecture; in other words, existing work rarely takes the multimodal nature of human behavior into account before classifying sentiments. The representation of a person (also called user modeling in domains such as Intelligent Tutoring Systems) is an important feature to consider when extracting subjective information such as the polarity of the sentiments that person expresses. To design an effective representation of a user, it is important to consider every source of data that informs about the user's current state. We present a user-sensitive deep multimodal architecture that takes advantage of deep learning and user data to extract a rich latent representation of a user; this representation mainly helps in text classification tasks. The architecture combines a Long Short-Term Memory (LSTM) network, an LSTM autoencoder, Convolutional Neural Networks, and multiple Deep Neural Networks in order to support the multimodality of the data. The resulting model has been tested on a public multimodal dataset and achieves better results than state-of-the-art algorithms on a similar task: detection of opinion polarity. The results suggest that the latent representation learnt from multimodal data helps in discriminating the polarity of an opinion (a sketch of such a fusion architecture is given below this record). |
Database: | OpenAIRE |
External link: |
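
The abstract above describes fusing an LSTM, an LSTM autoencoder, CNNs, and several DNNs so that a latent user representation can support polarity classification. The record does not give the exact layer configuration, so the following PyTorch sketch only illustrates one plausible way to wire such a fusion model; every module name, feature dimension, and choice of modality (text tokens, a sequential user signal, an auxiliary feature vector) is an assumption for illustration, not the authors' published architecture.

```python
# Hypothetical sketch of a user-sensitive multimodal fusion model.
# All dimensions and modality choices below are illustrative assumptions.
import torch
import torch.nn as nn


class LSTMAutoEncoder(nn.Module):
    """Learns a compact latent code from a sequential modality; the encoder's
    final hidden state serves as the latent user representation."""

    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.encoder = nn.LSTM(in_dim, latent_dim, batch_first=True)
        self.decoder = nn.LSTM(latent_dim, in_dim, batch_first=True)

    def forward(self, x):                        # x: (batch, seq_len, in_dim)
        _, (h, _) = self.encoder(x)              # h: (1, batch, latent_dim)
        latent = h.squeeze(0)
        # Repeat the latent code at every time step and decode; a
        # reconstruction loss on `recon` is the unsupervised part of training.
        repeated = latent.unsqueeze(1).expand(-1, x.size(1), -1)
        recon, _ = self.decoder(repeated)
        return latent, recon


class MultimodalPolarityModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, text_hidden=128,
                 user_dim=32, user_latent=16, aux_dim=64, n_classes=2):
        super().__init__()
        # Text branch: embedding + LSTM over word indices.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.text_lstm = nn.LSTM(embed_dim, text_hidden, batch_first=True)
        # 1-D CNN branch over the embedded text (n-gram style features).
        self.text_cnn = nn.Sequential(
            nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        # User branch: LSTM autoencoder over sequential user signals.
        self.user_ae = LSTMAutoEncoder(user_dim, user_latent)
        # Auxiliary modality branch (e.g. acoustic features): a plain DNN.
        self.aux_dnn = nn.Sequential(nn.Linear(aux_dim, 32), nn.ReLU())
        # Fusion + polarity classification head.
        fused = text_hidden + 64 + user_latent + 32
        self.classifier = nn.Sequential(
            nn.Linear(fused, 64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, n_classes),
        )

    def forward(self, tokens, user_seq, aux_feats):
        emb = self.embed(tokens)                       # (B, T, E)
        _, (h_text, _) = self.text_lstm(emb)
        cnn_feat = self.text_cnn(emb.transpose(1, 2)).squeeze(-1)
        user_latent, recon = self.user_ae(user_seq)
        aux = self.aux_dnn(aux_feats)
        fused = torch.cat([h_text.squeeze(0), cnn_feat, user_latent, aux], dim=1)
        return self.classifier(fused), recon


# Minimal usage example with random tensors standing in for real data.
if __name__ == "__main__":
    model = MultimodalPolarityModel(vocab_size=10000)
    tokens = torch.randint(0, 10000, (4, 20))          # 4 utterances, 20 tokens
    user_seq = torch.randn(4, 15, 32)                  # sequential user signals
    aux_feats = torch.randn(4, 64)                     # e.g. acoustic features
    logits, recon = model(tokens, user_seq, aux_feats)
    print(logits.shape)                                # torch.Size([4, 2])
```

Training such a sketch would presumably combine a supervised cross-entropy loss on `logits` with an unsupervised reconstruction loss on `recon`, which is one way a setup like this becomes semi-supervised as the title suggests; the actual losses and weighting used in the paper are not stated in this record.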