Poster C32, Friday, August 17, 10:30 am – 12:15 pm, Room 2000AB

Decoding another’s feeling of knowing from spoken language with multivariate pattern analysis on fMRI

Xiaoming Jiang1,2, Ryan Sanford3, Marc D. Pell2,3;1Department of Psychology, School of Humanity, Tongji University, China, 2School of Communication Sciences and Disorders, McGill University, Canada, 3Montreal Neurological Institute, McGill University, Canada

Recent neuroimaging studies of spoken language have examined the neural underpinnings of listeners' judgments of a speaker's feeling of (un)knowing based on the speaker's interpersonal stance (Jiang & Pell, 2015, 2016a, 2016b, 2017; Jiang, Sanford, & Pell, 2017). However, the neural mechanisms of social judgments based on multiple cues, such as verbal (factual vs. false statements) and nonverbal cues (confident vs. doubtful voice), remain unclear. In this fMRI study, we investigated the brain activation patterns that underlie feeling-of-knowing judgments of factual and false statements spoken in confident and doubtful tones. We used multivariate pattern analysis (MVPA) to decode the patterns of neural activity associated with speaker confidence and with the truth value of general knowledge statements, within anatomical regions and functional networks previously shown to be important for decoding social information in spoken language. The anatomical regions included the left and right superior, middle, and inferior temporal gyri; the functional networks included the default mode, salience, auditory, language, and executive control networks. Eighteen participants listened to true or false statements spoken in a confident or doubtful tone and judged whether the speaker knew what he was talking about. A support vector machine classified the level of confidence and the truth value, following a leave-one-run-out cross-validation scheme at the participant level. These analyses were performed across the areas of interest using the searchlight method. Significant brain activation patterns were identified with permutation testing at p < 0.05, followed by correction for multiple comparisons at a family-wise error rate of 0.01.
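The classification scheme above can be sketched as follows. This is an illustrative example only, not the authors' pipeline: it applies a linear support vector machine with leave-one-run-out cross-validation to simulated voxel patterns, using scikit-learn; all dimensions (number of runs, trials per run, voxels) and the injected signal are assumptions for the sake of the sketch.

```python
# Minimal sketch of leave-one-run-out cross-validated SVM classification,
# as in the MVPA scheme described above. Data here are synthetic; the
# real analysis would use beta estimates from each searchlight sphere.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

n_runs, trials_per_run, n_voxels = 6, 20, 50        # assumed dimensions
X = rng.normal(size=(n_runs * trials_per_run, n_voxels))
y = rng.integers(0, 2, size=n_runs * trials_per_run)  # e.g. confident=1, doubtful=0
X[y == 1] += 0.5                                      # inject a weak class signal
runs = np.repeat(np.arange(n_runs), trials_per_run)   # run label for each trial

# One fold per run: train on all other runs, test on the held-out run.
cv = LeaveOneGroupOut()
scores = cross_val_score(SVC(kernel="linear"), X, y, groups=runs, cv=cv)
print(f"{len(scores)} folds, mean accuracy = {scores.mean():.2f}")
```

In the full analysis, this decoding step would be repeated within each searchlight sphere, and the resulting accuracy maps assessed with permutation testing.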
We observed that the right superior temporal pole and the anterior cingulate cortex (ACC) and middle cingulate cortex (MCC), regions belonging to the default mode and salience networks, respectively, significantly classified confident versus doubtful expressions. When classifying truth value, we observed significant activation patterns in the ACC, MCC, bilateral superior and middle temporal gyri, precuneus, left middle frontal gyrus, and right cerebellum; these regions belong to the default mode, salience, executive control, and language networks. These results point to multiple independent functional networks that underlie the decoding of how something was said and what was said, including networks for detecting the salience of verbal and nonverbal cues and for integrating and reconciling cue compatibility, which together support the abstraction of speaker meaning from these cues.

References
[1] Jiang, X., Sanford, R., & Pell, M. D. (2017). Neural systems for evaluating speaker (un)believability. Human Brain Mapping, 38, 3732-3749.
[2] Jiang, X., & Pell, M. D. (2017). The sound of confidence and doubt. Speech Communication, 88, 106-126.
[3] Jiang, X., & Pell, M. D. (2016b). The feeling of another's knowing: How "mixed messages" in speech are reconciled. Journal of Experimental Psychology: Human Perception and Performance, 42, 1412-1428.
[4] Jiang, X., & Pell, M. D. (2016a). Neural responses towards a speaker's feeling of (un)knowing. Neuropsychologia, 81, 79-93.
[5] Jiang, X., & Pell, M. D. (2015). On how the brain decodes speaker's confidence. Cortex, 66, 9-34.

Topic Area: Meaning: Prosody, Social and Emotional Processes