Author:
McDuff, Daniel; Amr, May; Kaliouby, Rana el
Source:
IEEE Transactions on Affective Computing, Jan. 2019, Vol. 10, Issue 1, pp. 7-17 (11 pp.)
Abstract:
Public datasets have played a significant role in advancing the state of the art in automated facial coding. Many of these datasets contain posed expressions and/or videos recorded in controlled lab conditions with little variation in lighting or head pose, and as such do not reflect the conditions observed in many real-world applications. We present AM-FED+, an extended dataset of naturalistic facial response videos collected in everyday settings. The dataset contains 1,044 videos, of which 545 (263,705 frames, or 21,859 seconds) have been comprehensively manually coded for facial action units. These videos serve as a challenging benchmark for automated facial coding systems. All videos include gender labels, and a large subset (77 percent) includes age and country information. Subjects' self-reported liking of and familiarity with the stimuli are also included. We provide automated facial landmark detection locations for the videos. Finally, baseline action unit classification results are presented for the coded videos. The dataset is available to download online: https://www.affectiva.com/facial-expression-dataset/ [ABSTRACT FROM AUTHOR]
Database:
Complementary Index
External link: