Facial Expression Recognition Under Partial Occlusion from Virtual Reality Headsets based on Transfer Learning
| Author | Naimul Mefraz Khan, Bita Houshmand |
| --- | --- |
| Year of publication | 2023 |
| Subject | FOS: Computer and information sciences; Facial expression; Channel (digital image); Computer science; Computer Vision and Pattern Recognition (cs.CV); Face (geometry); Headset; Speech recognition; Benchmark (computing); Computer Science - Computer Vision and Pattern Recognition; Virtual reality; Transfer of learning; Convolutional neural network |
| Source | BigMM |
| DOI | 10.32920/22734302.v1 |
| Description | Facial expressions of emotion are a major channel in our daily communication, and they have been the subject of intense research in recent years. To automatically infer facial expressions, approaches based on convolutional neural networks (CNNs) have become widely adopted due to their proven applicability to the Facial Expression Recognition (FER) task. Meanwhile, Virtual Reality (VR) has gained popularity as an immersive multimedia platform where FER can provide enriched media experiences. However, recognizing facial expressions while wearing a head-mounted VR headset is challenging because the upper half of the face is completely occluded. In this paper we address this problem, focusing on facial expression recognition in the presence of severe occlusion where the user is wearing a head-mounted display in a VR setting. We propose a geometric model to simulate the occlusion produced by a Samsung Gear VR headset that can be applied to existing FER datasets. We then adopt a transfer learning approach, starting from two pretrained networks, namely VGG and ResNet, and further fine-tune them on the FER+ and RAF-DB datasets (illustrative sketches of both steps follow this record). Experimental results show that our approach achieves results comparable to existing methods while training on three modified benchmark datasets that adhere to realistic occlusion resulting from wearing a commodity VR headset. Code for this paper is available at: https://github.com/bita-github/MRP-FER. To be presented at IEEE BigMM 2020. |
| Database | OpenAIRE |
| External link | |
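
The description above mentions a geometric model that adds a simulated headset occlusion to existing FER datasets. Below is a minimal sketch of that general idea, assuming a face crop with known eye landmarks and a fixed rectangular occluder grown by a `pad` margin; the function name, the margin, and the rectangle are illustrative assumptions, not the paper's calibrated Gear VR geometry.

```python
import numpy as np

def simulate_headset_occlusion(image, eye_landmarks, pad=0.25):
    """Black out a rectangle around the eyes to mimic an HMD.

    `image` is an HxWxC uint8 face crop; `eye_landmarks` is an
    (N, 2) array of (x, y) points around the eyes and brows. The
    rectangular occluder and the `pad` margin are illustrative
    assumptions, not the paper's calibrated headset geometry.
    """
    occluded = image.copy()
    h, w = image.shape[:2]
    xs, ys = eye_landmarks[:, 0], eye_landmarks[:, 1]
    # Grow the landmark bounding box by a relative margin so the
    # mask covers the headset footprint, then clip to the image.
    mx = int((xs.max() - xs.min()) * pad)
    my = int((ys.max() - ys.min()) * pad)
    x0, x1 = max(0, int(xs.min()) - mx), min(w, int(xs.max()) + mx)
    y0, y1 = max(0, int(ys.min()) - my), min(h, int(ys.max()) + my)
    occluded[y0:y1, x0:x1] = 0  # opaque black occluder
    return occluded
```

Applying such a mask offline to every image in FER+ or RAF-DB yields the kind of modified benchmark dataset the abstract describes.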
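The transfer learning step starts from pretrained VGG and ResNet backbones and fine-tunes them on FER+ and RAF-DB. The PyTorch sketch below shows one plausible setup using ResNet-18; the network depth, the 8-class head (FER+ labels eight emotions, while RAF-DB's basic set has seven), and the optimizer settings are assumptions for illustration, not the paper's reported configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 8  # assumption: FER+ has 8 emotion labels; use 7 for RAF-DB basic

def build_fer_model(num_classes=NUM_CLASSES):
    """ImageNet-pretrained ResNet-18 with a fresh FER classifier head."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # replace head
    return model

model = build_fer_model()
# Fine-tune the whole network with a small learning rate; freezing the
# early blocks is a common alternative when the target dataset is small.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```

Swapping in `models.vgg16(weights=models.VGG16_Weights.DEFAULT)` and replacing its `classifier[-1]` layer would give the analogous VGG variant.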