
How Do We Perceive Faces in Our Peripheral Vision?

New research shows we detect specific face features in our peripheral vision.

Key points

  • Our peripheral vision is severely limited by blur and masking, making it difficult to recognize objects.
  • Despite this, research has shown that we quickly and automatically direct our attention to peripheral faces.
  • New research shows we also detect peripheral features (like the eyes), suggesting a more sophisticated peripheral face processing system.

Most of us have at some point felt the sensation that someone is staring at us in our peripheral vision. A quick glance is all it takes to confirm or deny this sensation. Sometimes there really is a face looking at us; sometimes the face is not actually looking at us; and sometimes there is no face at all.

The visual system is built in such a way that our central vision is much more detailed and precise than our peripheral vision. In order to recognize objects, interpret faces, or read text, we must make eye movements toward the item of interest. These saccadic eye movements bring the item of interest into our central vision, where high-density retinal cones and cortical magnification allow us to perceive the stimulus in maximum detail.

Fast, Automatic Fixations to Faces

Sometimes our eye movements toward peripheral stimuli are voluntary and sometimes they happen without our awareness. Research by Sébastien Crouzet and colleagues in 2010 showed that faces in the periphery quickly and automatically guide our eye movements. The researchers developed a saccadic choice task where participants are shown two images side by side and they must quickly look at the image of interest. For example, in one block, participants might be instructed to look at the vehicle in each trial (and ignore the face), and in another block, they might be instructed to look at the face (and ignore the vehicle).

By analyzing the speed and accuracy of participants' eye movements, Crouzet and colleagues discovered two intriguing facts: (1) faces elicit extremely fast saccades that are initiated within about 100 ms—much faster than those toward other image categories like vehicles or animals—and (2) people often make unintentional eye movements toward faces, even when they are instructed to ignore the face. This research demonstrated a fast and automatic tendency to look at peripheral faces, but left open the question: what do we actually see when we see a face in the periphery?

In order to complete the saccadic choice task, participants in Crouzet et al.'s study had to perceive at least enough detail about the peripheral face to distinguish it from the lure categories (vehicles or animals). But since faces differ so greatly from vehicles and animals, participants could have relied on general image statistics (e.g., the overall dark and light contrast patterns of faces), or even something as simple as the basic outline of the face, to make their saccadic choice.

Detection of Peripheral Face Features

New research by Nicole Han and colleagues (2021) suggests that our peripheral vision actually represents much more than that.

In their study, participants were first instructed to fixate on a corner of the screen, and then were briefly shown a blurred face at the center of the screen. Participants had a fraction of a second (600 ms) to look at the face in order to later recognize it against a set of lures. Importantly, in some trials the blurred faces were intact, while in other trials they had features removed or placed in the wrong position on the face (e.g., the nose placed above the mouth). Han and colleagues measured the location within the face where participants' eye movements landed.

Consistent with prior research (e.g., Peterson and Eckstein, 2012), most participants gravitated to look at a particular spot on the face: just below the eyes (a position that is thought to be optimal for face recognition). Intriguingly, this looking behavior was consistent regardless of the jumbling of the face features. That is, whether the faces were arranged normally or jumbled by placing the eyes below the mouth and nose, participants still made their eye movements to land just below the eyes of the face.

This result demonstrates that our peripheral vision for faces is more sophisticated than we initially thought. When a face appears in our periphery, we don't just see a blurry face-like stimulus and then look to the place where the eyes are expected to be; rather, our visual system actually detects specific features—importantly, the eyes—and guides our eye movements toward those features.

A Built-In Face Detection System for Peripheral Vision?

The findings of Han and colleagues add to a growing set of evidence that suggests our peripheral vision may have its own built-in face detection system. Researchers such as Crouzet and colleagues have previously argued that the time it takes to initiate an eye movement toward a peripheral face (about 100 ms) is too fast to be explained by the typical visual pathways. In those pathways, information in the retina is relayed to the lateral geniculate nuclei (LGN), passed on to early visual cortex (V1), processed by additional visual cortical areas (V2–V4), and ultimately perceived as a face by the face-processing regions: the occipital face area (OFA, or the Inferior Occipital Gyrus, IOG-faces) and the fusiform face areas (FFA, or pFus-faces and mFus-faces, the posterior and middle fusiform face regions).

Given the number of synapses involved in this pathway, this processing stream would require at least 170 ms of processing before an eye movement could be initiated. Instead, eye movement data point toward a faster (perhaps subcortical) pathway that can accomplish this in a fraction of the time. The new findings by Han and colleagues suggest that this fast face processing system is not just tuned in to general face-like stimuli, but can actually identify specific facial features (like the eyes), regardless of where they are positioned on the face.

Future research may investigate this question further by asking what happens when faces are presented in the extreme periphery. At what point in the periphery does the ability to perceive facial features finally break down? Do we really have the ability to sense someone staring at us from the corner of our eye, or do we just have a bias to assume that is happening? And how does the ability (or bias) to detect peripheral faces vary across individuals?

References

Han, N. X., Chakravarthula, P. N., & Eckstein, M. P. (2021). Peripheral Facial Features Guiding Eye Movements and Reducing Fixational Variability.

Crouzet, S. M., Kirchner, H., & Thorpe, S. J. (2010). Fast saccades toward faces: Face detection in just 100 ms. Journal of Vision, 10(4), 16.

Peterson, M. F., & Eckstein, M. P. (2012). Looking just below the eyes is optimal across face recognition tasks. Proceedings of the National Academy of Sciences, 109(48), E3314-E3323.
