“In daily life, we usually recognize faces through sight and almost never explore them through touch,” says lead researcher Kazumichi Matsumiya of Tohoku University in Japan. “But we use information from multiple sensory modalities in order to perceive many everyday non-face objects and events, such as speech perception or object recognition — these new findings suggest that even face processing is essentially multisensory.”
In a series of studies, Matsumiya took advantage of a phenomenon called the “face aftereffect” to investigate whether our visual system responds to nonvisual signals for processing faces. In the face aftereffect, we adapt to a face with a particular expression — happiness, for example — which causes us to perceive a subsequent neutral face as having the opposite facial expression (i.e., sadness).
Matsumiya hypothesized that if the visual system really does respond to signals from another modality, then we should see evidence for face aftereffects from one modality to the other. So, adaptation to a face that is explored by touch should produce visual face aftereffects.
To test this, Matsumiya had participants explore, by touch, face masks that were concealed below a mirror. After this adaptation period, the participants were visually presented with a series of faces with varying expressions and were asked to classify each face as happy or sad. The visual faces and the masks were created from the same exemplar.
In line with his hypothesis, Matsumiya found that exploring the face masks by touch shifted participants’ perception of the visually presented faces relative to participants who had no adaptation period: the visual faces were perceived as having the expression opposite to that of the mask.
Further experiments ruled out other explanations for the results, including the possibility that the face aftereffects emerged because participants were intentionally imagining visual faces during the adaptation period.
And a fourth experiment revealed that the aftereffect also works the other way: Visual stimuli can influence how we perceive a face through touch.
According to Matsumiya, current views on face processing assume that the visual system only receives facial signals from the visual modality — but these experiments suggest that face perception is truly crossmodal.
“These findings suggest that facial information may be coded in a shared representation between vision and haptics in the brain,” notes Matsumiya, adding that the results may have implications for enhancing vision and telecommunication technologies in the development of aids for the visually impaired.
For more information about this study, please contact: Kazumichi Matsumiya at email@example.com.
The article abstract is available online.
This work was supported by a Grant-in-Aid for Scientific Research on Innovative Areas, “Face perception and recognition” from MEXT KAKENHI (23119704) and by the Research Institute of Electrical Communication, Tohoku University Original Research Support Program to K.M.
The APS journal Psychological Science is the highest-ranked empirical journal in psychology. For a copy of the article “Seeing a Haptically Explored Face: Visual Facial-Expression Aftereffect From Haptic Adaptation to a Face” and access to other Psychological Science research findings, please contact Anna Mikulak at 202-293-9300.