68‐3: DeepFatigueNet: A Model for Automatic Visual Fatigue Assessment Based on Raw Single‐Channel EEG.

Author: Song, Yaguang; Wang, Danli; Yue, Kang; Zheng, Nan
Subject:
Source: SID Symposium Digest of Technical Papers; Jun2019, Vol. 50 Issue 1, p965-968, 4p
Abstract: Three‐dimensional (3D) display has become increasingly popular in many fields. However, watching 3D content continuously can lead to visual fatigue that is harmful to users' visual system. Visual fatigue assessment aims at monitoring users' brain states based on electroencephalogram (EEG) signals to identify different fatigue levels and avoid severe fatigue. Most existing studies on the modeling of visual fatigue assessment rely on manual features extracted from EEG, which is time‐consuming and requires prior knowledge. Convolutional Neural Networks (CNNs), which have been used in computer vision and speech recognition, have attracted increasing interest, yet there are still few attempts to apply end‐to‐end EEG analysis to visual fatigue assessment. In this paper, we propose a deep learning model, DeepFatigueNet, to perform automatic feature extraction and classification from raw single‐channel EEG. DeepFatigueNet is evaluated on our own visual fatigue dataset and compared with state‐of‐the‐art deep learning methods for EEG‐based tasks. The overall accuracy of DeepFatigueNet reaches 75.9% on the three‐class classification task, exceeding the other models. The experimental results demonstrate the effectiveness of our model and show the potential of deep convolutional neural networks for end‐to‐end visual fatigue assessment. [ABSTRACT FROM AUTHOR]
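Note: The abstract does not disclose DeepFatigueNet's actual architecture. The sketch below is only a minimal illustration of the general approach described (a 1D CNN classifying raw single‐channel EEG windows into three fatigue levels); the sampling rate, window length, layer sizes, and kernel widths are all assumptions, not the authors' design.

    import torch
    import torch.nn as nn

    class RawEEGFatigueClassifier(nn.Module):
        """1D CNN mapping a raw single-channel EEG window to 3 fatigue levels.
        Illustrative only; hyperparameters are assumed, not from the paper."""

        def __init__(self, n_classes: int = 3):
            super().__init__()
            self.features = nn.Sequential(
                # Wide first kernel so the network can learn band-pass-like
                # filters directly from the raw signal (assumed design choice).
                nn.Conv1d(1, 16, kernel_size=64, stride=8),
                nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=8),
                nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(32, 64, kernel_size=8),
                nn.BatchNorm1d(64), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # global average pooling over time
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, 1, n_samples); e.g. a 30 s window at an assumed
            # 256 Hz sampling rate gives 7680 samples per window.
            h = self.features(x).squeeze(-1)   # (batch, 64)
            return self.classifier(h)          # unnormalized 3-class scores

    if __name__ == "__main__":
        model = RawEEGFatigueClassifier()
        dummy = torch.randn(8, 1, 7680)        # batch of 8 raw EEG windows
        print(model(dummy).shape)              # torch.Size([8, 3])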
Database: Complementary Index