Twin Identification over Viewpoint Change: A Deep Convolutional Neural Network Surpasses Humans.

Authors: Parde CJ (School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA); Strehle VE (School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA); Banerjee V (University of Maryland Institute of Advanced Computer Studies, University of Maryland, USA); Hu Y (School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA); Cavazos JG (School of Education, University of California Irvine, USA); Castillo CD (Whiting School of Engineering, Johns Hopkins University, USA); O'Toole AJ (School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA)
Language: English
Source: ACM Transactions on Applied Perception [ACM Trans Appl Percept] 2023 Jul; Vol. 20 (3).
DOI: 10.1145/3609224
Abstract: Deep convolutional neural networks (DCNNs) have achieved human-level accuracy in face identification (Phillips et al., 2018), though it is unclear how accurately they discriminate highly similar faces. Here, humans and a DCNN performed a challenging face-identity matching task that included identical twins. Participants (N = 87) viewed pairs of face images of three types: same-identity, general imposters (different identities from similar demographic groups), and twin imposters (identical twin siblings). The task was to determine whether the pairs showed the same person or different people. Identity comparisons were tested in three viewpoint-disparity conditions: frontal to frontal, frontal to 45° profile, and frontal to 90° profile. Accuracy for discriminating matched-identity pairs from twin-imposter pairs and general-imposter pairs was assessed in each viewpoint-disparity condition. Humans were more accurate for general-imposter pairs than twin-imposter pairs, and accuracy declined with increased viewpoint disparity between the images in a pair. A DCNN trained for face identification (Ranjan et al., 2018) was tested on the same image pairs presented to humans. Machine performance mirrored the pattern of human accuracy, but with performance at or above all humans in all but one condition. Human and machine similarity scores were compared across all image-pair types. This item-level analysis showed that human and machine similarity ratings correlated significantly in six of nine image-pair types (range r = 0.38 to r = 0.63), suggesting general accord between the perception of face similarity by humans and the DCNN. These findings contribute to our understanding of DCNN performance in discriminating high-resemblance faces, demonstrate that the DCNN performs at or above human levels, and suggest a degree of parity between the features used by humans and the DCNN.
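The item-level analysis described above compares a machine similarity score per image pair (typically the cosine similarity of the DCNN's embedding vectors) with human ratings, then correlates the two across items. A minimal sketch of that comparison, with synthetic data standing in for the study's actual embeddings and ratings (the specific vectors, noise model, and sample size here are illustrative assumptions, not the paper's data):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pearson_r(x, y):
    """Pearson correlation between two per-item score arrays."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# Illustrative data only: random "embeddings" for 50 image pairs and
# hypothetical human ratings loosely related to the machine scores.
rng = np.random.default_rng(0)
embeddings_a = rng.normal(size=(50, 128))   # first image of each pair
embeddings_b = rng.normal(size=(50, 128))   # second image of each pair

machine_scores = np.array([cosine_similarity(a, b)
                           for a, b in zip(embeddings_a, embeddings_b)])
human_ratings = 0.5 * machine_scores + rng.normal(0.0, 0.1, size=50)

# Item-level correlation between human and machine similarity scores.
r = pearson_r(machine_scores, human_ratings)
```

In the study itself this correlation was computed separately within each of the nine image-pair types (three pair types crossed with three viewpoint-disparity conditions), which is why nine r values are reported.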
Database: MEDLINE