What makes a face memorable? The relationship between face memory and emotional state reasoning

Abstract

Face processing models suggest a neural and functional dissociation exists between processing facial identity and expressive states. This may explain why the relationship between the ability to read emotional meaning from expressive cues and the ability to remember faces based on appearance cues remains a relatively uncharted area of inquiry. One of the fundamental ways that the human face differs from other visual objects is that people read complex social meaning into faces. Thus, we hypothesized that an important part of face memory is the extent to which people read such meaning into faces. Specifically, we predicted that (a) individual differences in the ability to decode emotional messages from expressive faces would be positively associated with the ability to encode and subsequently remember a separate set of neutral faces in the same participants, and (b) that stimulus-level differences in the extent to which a separate group of raters ascribed emotionality to these same neutral faces would also be positively associated with face memory. We report evidence supporting both of these hypotheses. These findings suggest that individual differences in emotional state reasoning from faces, on both the decoder and encoder levels, are meaningfully associated with the ability to remember facial identity.


Introduction
The face is at the crux of social interaction. Extensive neural networks are dedicated to decoding the various cues signaled by the face, such as identity, emotion, gaze direction, race, age, or sex (Calder & Young, 2005; Haxby et al., 2000). Perhaps no other stimulus holds so much information in such a small and overlapping array of features, and no stimulus is as important in social behavior. When we see a face, we do more than encode facial features into memory; we read rich meaning into them. Thus, it stands to reason that individual differences in the ability to read mental and emotional meaning into a face might influence how memorable that face is. To date, this proposed relationship has remained untested, likely because contemporary face processing models have long emphasized a double dissociation between the processing of facial identity and expressive states (e.g., Bruce & Young, 1986; but see Calder & Young, 2005). In the following experiment, we examine the proposed relationship between emotional state reasoning and face memory by evaluating individual and stimulus differences in the two constructs and how they are related.

The human visual system has a remarkable ability to distinguish and remember faces based on only slight variations in the structure and layout of facial features. Individual differences exist in the ability to recognize and remember faces (Duchaine & Nakayama, 2006). These differences are consistent across tests of face recognition that use frontal or profile images and high or low levels of randomly added white noise, providing strong evidence that stable individual differences exist in the ability to remember faces. These differences are widely distributed, ranging from "super-recognizers" (Russell, Duchaine, & Nakayama, 2009) to people with prosopagnosia, a disorder that undermines the ability to recognize others by face at all.

Just as with face memory, individual differences exist in the ability to read meaning from nonverbal information and, specifically, to decode the emotional states and intentions of others. This ability is sometimes referred to as nonverbal sensitivity, mentalizing, mindreading, or theory of mind (Allison et al., 2000; Amodio & Frith, 2006; Brothers, 1990; Nowicki & Carton, 1993). Because it carries so much social information, the face provides a unique window into the mental states of others. People can consistently identify basic emotional states from faces (e.g., Ekman, 1972) as well as read complex emotional states and intentions (Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001). Inferring another person's emotional state from facial cues, whether basic or complex (henceforth, emotional state decoding), is essential for smooth social interaction.

The ability to recognize facial displays of emotion is highly consistent across cultures (e.g., Ekman, 1972). However, more recent research has led to some revision of these original claims about the universality of emotional decoding (Keltner, Ekman, Gonzaga, & Beer, 2003). Cultural and group differences exist in how people read emotional states. For instance, Japanese participants rated emotional expressions posed by both Japanese and Caucasian individuals as more intense than Caucasian participants rated those same expressions (Matsumoto & Ekman, 1989). Additionally, people show a significant advantage when decoding basic (Elfenbein & Ambady, 2002) and complex emotions displayed by members of their own cultural group compared to those of other cultural groups (Adams et al., 2010). Just as cultural and group differences exist in how people read emotional meaning from faces, individual differences also exist in this ability (e.g., Baron-Cohen et al., 2001; Nowicki & Duke, 1994; Rosenthal et al., 1979). Thus, while emotional expressions may be universal to a degree, individual and cultural differences affect how emotional states are decoded from them.
In sum, emotional state reasoning and face memory are two important processes for social interaction. Individual differences exist in both of these abilities, so it is plausible they may be related. If so, skill at one of these abilities may be predictive of skill in the other. Examining this proposed relationship is the primary aim of the current work.

Current study
The present experiment tested the relationship between emotional state decoding and face memory in two ways. First, we examined the relationship between individual differences in emotional state decoding ability and face memory (Phase 1). Specifically, we hypothesized that people who score higher on standardized tests of emotional decoding ability would also better remember faces posed with neutral expressions. Second, we examined the relationship between stimulus-level differences in the emotionality that a separate group of raters ascribed to these same neutral faces and how well those faces were remembered (Phase 2), as illustrated in the sketch below.
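To make the two analyses concrete, the following is a minimal sketch of the observer-level and stimulus-level correlations in Python with SciPy. The sample sizes, scores, and variable names are invented placeholders for illustration, not the paper's data or scoring procedures.

```python
# Hypothetical sketch of the two correlational analyses described above.
# All data are simulated placeholders; the paper's actual measures differ.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# --- Phase 1: observer-level analysis (one row per participant) ---
# decoding_score: accuracy on an emotion-decoding test (e.g., DANVA)
# memory_score:   recognition accuracy for a separate set of neutral faces
decoding_score = rng.normal(0.80, 0.08, size=60)
memory_score = 0.5 * decoding_score + rng.normal(0, 0.05, size=60)

r_obs, p_obs = pearsonr(decoding_score, memory_score)
print(f"Observer-level: r = {r_obs:.2f}, p = {p_obs:.3f}")

# --- Phase 2: stimulus-level analysis (one row per neutral face) ---
# emotionality: mean rated emotionality of each face (separate raters)
# memorability: proportion of participants who later recognized that face
emotionality = rng.uniform(1, 7, size=80)
memorability = 0.05 * emotionality + rng.normal(0.5, 0.1, size=80)

r_stim, p_stim = pearsonr(emotionality, memorability)
print(f"Stimulus-level: r = {r_stim:.2f}, p = {p_stim:.3f}")
```

The key design feature is that the unit of analysis changes between phases: participants in Phase 1, individual face stimuli in Phase 2, so the two correlations test the hypothesis at the decoder and encoder levels respectively.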
Discussion
The current study tested the relationship between emotional state reasoning and face memory. To our knowledge, this study is the first demonstration that a relationship exists between individual differences in emotional state decoding ability and face memory at either the observer or the stimulus level.
In the first phase of this study, scores on the DANVA alone predicted face memory. The DANVA assesses the decoding of basic emotions from full-face displays. This contrasts with the Eyes Test, which assesses the decoding of more complex emotional states from the eye region alone (Baron-Cohen et al., 2001).
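One way to frame the finding that "the DANVA alone predicted face memory" is a regression with both decoding measures entered as predictors. The sketch below shows such an analysis under stated assumptions; the variable names, simulated data, and use of ordinary least squares are illustrative choices, not the paper's reported procedure.

```python
# Hypothetical sketch: regress face memory on both decoding measures to ask
# whether one (e.g., DANVA) predicts memory over and above the other.
# Data and names are invented; the paper's actual analysis may differ.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
danva = rng.normal(0.80, 0.08, n)              # hypothetical DANVA accuracy
eyes = rng.normal(0.70, 0.10, n)               # hypothetical Eyes Test accuracy
memory = 0.6 * danva + rng.normal(0, 0.05, n)  # hypothetical face memory score

X = sm.add_constant(np.column_stack([danva, eyes]))
model = sm.OLS(memory, X).fit()
print(model.summary())  # per-predictor coefficients and p-values
```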
Acknowledgements
This research was supported by a University Graduate Fellowship to RGF, Jr. and a National Science Foundation Research Grant (0544533) to RBA, Jr. We are grateful to Tracy Dent, Christine Dziewit, Mrunal Shah, and Katie VanHorn for their help in data collection and to Anthony Nelson and Michael Stevenson for helpful comments on an earlier draft of this manuscript.