Judging the emotional nature of a scene requires us to deliberately integrate pieces of evidence with varying intensities of emotion. Our existing knowledge about emotion-related perceptual decision-making is largely based on paradigms using a single stimulus or, when multiple stimuli are involved, rapid decisions. Consequently, it remains unclear how we deliberately sample and integrate multiple pieces of emotional evidence to form an overall judgment. Findings from non-emotion rapid decision-making studies show that humans down-sample and down-weight extreme evidence. However, deliberate decision-making may rely on a different attention mode than rapid decision-making does, and extreme emotional stimuli are inherently salient. Given these critical differences, it is imperative to directly examine the deliberate decision-making process about multiple emotional stimuli. In the current study, human participants (N = 33) freely viewed arrays of faces with expressions ranging from extremely fearful to extremely happy while their eye movements were tracked. They then decided whether the faces were, on average, more fearful or happier. In contrast to conclusions drawn from non-emotion and rapid decision-making studies, eye-movement measures revealed that participants attentionally sampled extreme emotional evidence more than less extreme evidence. Computational modeling indicated that, despite this biased attention distribution, participants weighted all pieces of emotional evidence equally. These findings provide novel insights into how people sample and integrate multiple pieces of emotional evidence, contribute to a more comprehensive understanding of emotion-related decision-making, and shed light on the mechanisms of pathological affective decisions.
Keywords: computational modeling; emotion; eye-tracking; multi-evidence; perceptual decision-making.
© 2024 Society for Psychophysiological Research.