
Processing communicative facial and vocal cues in the superior temporal sulcus


dc.creator Deen, Ben
dc.creator Saxe, Rebecca
dc.creator Kanwisher, Nancy
dc.date 2021-10-27T20:23:34Z
dc.date 2020
dc.date 2021-03-19T14:47:33Z
dc.date.accessioned 2023-03-01T18:06:20Z
dc.date.available 2023-03-01T18:06:20Z
dc.identifier https://hdl.handle.net/1721.1/135465
dc.identifier.uri http://localhost:8080/xmlui/handle/CUHPOERS/278765
dc.description © 2020. Facial and vocal cues provide critical social information about other humans, including their emotional and attentional states and the content of their speech. Recent work has shown that the face-responsive region of posterior superior temporal sulcus (“fSTS”) also responds strongly to vocal sounds. Here, we investigate the functional role of this region and the broader STS by measuring responses to a range of face movements, vocal sounds, and hand movements using fMRI. We find that the fSTS responds broadly to different types of audio and visual face action, including both richly social communicative actions and minimally social noncommunicative actions, ruling out hypotheses of specialization for processing speech signals, or communicative signals more generally. Strikingly, however, responses to hand movements were very low, whether communicative or not, indicating a specific role in the analysis of face actions (facial and vocal), not a general role in the perception of any human action. Furthermore, spatial patterns of response in this region could decode communicative from noncommunicative face actions, both within and across modalities (facial/vocal cues), indicating sensitivity to an abstract social dimension. These functional properties of the fSTS contrast with a region of middle STS that has a selective, largely unimodal auditory response to speech sounds over both communicative and noncommunicative nonspeech vocal sounds, as well as nonvocal sounds. Region-of-interest analyses were corroborated by a data-driven independent component analysis, which identified face-voice and auditory speech responses as dominant sources of voxelwise variance across the STS. These results suggest that the STS contains separate processing streams for the audiovisual analysis of face actions and for auditory speech processing.
dc.format application/pdf
dc.language en
dc.publisher Elsevier BV
dc.relation 10.1016/J.NEUROIMAGE.2020.117191
dc.relation NeuroImage
dc.rights Creative Commons Attribution 4.0 International license
dc.rights https://creativecommons.org/licenses/by/4.0/
dc.source Elsevier
dc.title Processing communicative facial and vocal cues in the superior temporal sulcus
dc.type Article
dc.type http://purl.org/eprint/type/JournalArticle
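
Analysis sketches

The abstract names two analyses that lend themselves to a brief illustration: cross-modal decoding of communicative versus noncommunicative face actions from fSTS voxel patterns, and a data-driven independent component analysis across STS voxels. The Python sketch below is a minimal illustration under stated assumptions, not the authors' actual pipeline: the array names, dimensions, and scikit-learn estimators (LinearSVC, FastICA) are all assumptions, and random data stand in for real fMRI response patterns.

import numpy as np
from sklearn.decomposition import FastICA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels, n_timepoints = 40, 200, 300   # hypothetical dimensions

# --- Cross-modal decoding: train on facial patterns, test on vocal patterns ---
X_facial = rng.standard_normal((n_trials, n_voxels))  # facial-condition voxel patterns (placeholder)
X_vocal = rng.standard_normal((n_trials, n_voxels))   # vocal-condition voxel patterns (placeholder)
y = np.repeat([0, 1], n_trials // 2)                  # 0 = noncommunicative, 1 = communicative

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_facial, y)                                  # fit within the facial modality
print("facial -> vocal decoding accuracy:", clf.score(X_vocal, y))

# --- Spatial ICA: independent sources of voxelwise variance across the STS ---
sts_data = rng.standard_normal((n_voxels, n_timepoints))  # placeholder voxel-by-time matrix
ica = FastICA(n_components=10, random_state=0)
spatial_maps = ica.fit_transform(sts_data)            # (n_voxels, 10): one spatial map per component
time_courses = ica.mixing_                            # (n_timepoints, 10): matching time courses

In the decoding half, above-chance accuracy on the held-out vocal patterns is what would indicate a modality-invariant representation of communicativeness, as the abstract reports for the fSTS; in the ICA half, the recovered spatial maps play the role of the face-voice and auditory-speech components described as dominant sources of voxelwise variance.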


Files in this item

File Size Format
1-s2.0-S1053811920306777-main.pdf 4.404 MB application/pdf
