Facial expressions can be categorized along the upper-lower facial axis, from a perceptual perspective

A critical question, fundamental for building models of emotion, is how to categorize emotions. Previous studies have typically taken one of two approaches: (a) they focused on pre-perceptual visual cues, that is, how salient facial features or configurations are displayed; or (b) they focused on post-perceptual affective experiences, that is, how emotions affect behavior. In this study, we attempted to group emotions at a peri-perceptual processing level: it is well known that humans perceive different facial expressions differently; can we, therefore, classify facial expressions into distinct categories in terms of their perceptual similarities? Here, using a novel non-lexical paradigm, we assessed the perceptual dissimilarities between 20 facial expressions using reaction times. Multidimensional scaling analysis revealed that facial expressions were organized predominantly along the upper-lower face axis. Cluster analysis of the behavioral data delineated three superordinate categories, and eye-tracking measurements validated these clustering results. Interestingly, these superordinate categories can be conceptualized according to how facial displays interact with acoustic communication: the first group comprises expressions with salient mouth features, which likely link to species-specific vocalizations, for example, crying and laughing. The second group comprises visual displays with diagnostic features in both the mouth and eye regions; these are not directly articulable but can be expressed prosodically, for example, sadness and anger. Expressions in the third group are also whole-face expressions but are completely independent of vocalization and are likely blends of two or more elementary expressions. We propose a theoretical framework for this tripartite division, in which the distinct expression subsets are interpreted as successive phases in an evolutionary chain.
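The analysis pipeline the abstract describes, multidimensional scaling over a reaction-time-derived dissimilarity matrix followed by a cluster analysis cut into three superordinate categories, can be sketched as follows. This is a minimal illustration, not the authors' code: the random 20x20 matrix, the average-linkage method, and all names are assumptions standing in for the study's behavioral data.

```python
# Minimal sketch of the abstract's pipeline: MDS embedding of a
# dissimilarity matrix, then hierarchical clustering cut at k = 3.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)

# Placeholder: symmetric 20x20 dissimilarity matrix with zero diagonal,
# standing in for pairwise perceptual dissimilarities between the
# 20 facial expressions (derived from reaction times in the study).
raw = rng.random((20, 20))
D = (raw + raw.T) / 2
np.fill_diagonal(D, 0.0)

# MDS on the precomputed dissimilarities; in the paper, the dominant
# embedding dimension corresponds to the upper-lower face axis.
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(D)

# Average-linkage hierarchical clustering, cut into three clusters to
# mirror the three superordinate categories reported in the abstract.
Z = linkage(squareform(D, checks=False), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
print(embedding.shape, labels)
```

On real data, one would inspect the first MDS dimension for the upper-lower face organization and validate the three-cluster solution against the eye-tracking measurements, as the abstract reports.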

Bibliographic Details
Main Authors: Davies, F. (Author), Guo, N. (Author), Guo, S. (Author), Hou, Y. (Author), Ma, C. (Author), Zhu, X. (Author)
Format: Article
Language: English
Published: Springer 2021
Subjects: Categorization, Emotions, Face, Face perception, Facial Expression, Humans, Photic Stimulation, Reaction Time
Online Access: View Fulltext in Publisher
LEADER 03040nam a2200385Ia 4500
001 10.3758-s13414-021-02281-6
008 220427s2021 CNT 000 0 eng d
022 |a 1943-3921 (ISSN) 
245 1 0 |a Facial expressions can be categorized along the upper-lower facial axis, from a perceptual perspective 
260 0 |b Springer  |c 2021 
856 |z View Fulltext in Publisher  |u https://doi.org/10.3758/s13414-021-02281-6 
520 3 |a A critical question, fundamental for building models of emotion, is how to categorize emotions. Previous studies have typically taken one of two approaches: (a) they focused on pre-perceptual visual cues, that is, how salient facial features or configurations are displayed; or (b) they focused on post-perceptual affective experiences, that is, how emotions affect behavior. In this study, we attempted to group emotions at a peri-perceptual processing level: it is well known that humans perceive different facial expressions differently; can we, therefore, classify facial expressions into distinct categories in terms of their perceptual similarities? Here, using a novel non-lexical paradigm, we assessed the perceptual dissimilarities between 20 facial expressions using reaction times. Multidimensional scaling analysis revealed that facial expressions were organized predominantly along the upper-lower face axis. Cluster analysis of the behavioral data delineated three superordinate categories, and eye-tracking measurements validated these clustering results. Interestingly, these superordinate categories can be conceptualized according to how facial displays interact with acoustic communication: the first group comprises expressions with salient mouth features, which likely link to species-specific vocalizations, for example, crying and laughing. The second group comprises visual displays with diagnostic features in both the mouth and eye regions; these are not directly articulable but can be expressed prosodically, for example, sadness and anger. Expressions in the third group are also whole-face expressions but are completely independent of vocalization and are likely blends of two or more elementary expressions. We propose a theoretical framework for this tripartite division, in which the distinct expression subsets are interpreted as successive phases in an evolutionary chain. © 2021, The Psychonomic Society, Inc. 
650 0 4 |a Categorization 
650 0 4 |a Emotions 
650 0 4 |a Face 
650 0 4 |a Face perception 
650 0 4 |a Facial Expression 
650 0 4 |a Humans 
650 0 4 |a Photic Stimulation 
650 0 4 |a Reaction Time 
700 1 |a Davies, F.  |e author 
700 1 |a Guo, N.  |e author 
700 1 |a Guo, S.  |e author 
700 1 |a Hou, Y.  |e author 
700 1 |a Ma, C.  |e author 
700 1 |a Zhu, X.  |e author 
773 |t Attention, Perception, & Psychophysics