Individual differences in cortical gray matter volume reflect differences in speech sound encoding and phoneme categorization
Grace Gervino1, Joseph Toscano; 1Villanova University
Individual differences are prevalent in language processing. An open question concerns the extent to which differences in brain structure can be mapped to behavioral and physiological measures of language comprehension. We analyzed data from Toscano et al. (2018), which include structural MRI data (T1-weighted images), as well as EEG and behavioral data for sounds varying along a /b/-/p/ voice onset time (VOT) continuum. We measured correlations between gray matter volume (GMV) and two other measures: (1) the slope of the listener's VOT categorization function, which indexes how strongly behavioral responses are shaped by phoneme categories, and (2) the slope of the auditory N1 ERP component across the continuum; because N1 amplitude varies linearly with changes in VOT, this slope provides a measure of early acoustic cue encoding. We predicted that GMV in areas involved in bottom-up speech processing (the superior temporal gyrus [STG], planum temporale [PT], and Heschl's gyrus [HG]) would be positively correlated with the N1 slope, as greater GMV could allow for more precise acoustic encoding. We also predicted that GMV in areas that may provide top-down feedback to early speech areas, including the middle temporal gyrus (MTG) and inferior frontal gyrus (IFG), would be negatively correlated with the N1 slope, as listeners with greater GMV in these areas would be more susceptible to top-down effects and show poorer bottom-up cue encoding. Lastly, we predicted that MTG and IFG GMV would be positively correlated with the slope of the categorization function (i.e., larger GMV in these areas would yield more discrete behavioral responses), and we predicted negative correlations for PT and HG. The T1-weighted images were filtered, corrected, segmented, skull-stripped, and parcellated; volumes were then calculated, and segments were normalized to a reference space using DARTEL.
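The two derived measures can be illustrated with a minimal sketch. The code below uses hypothetical, simulated data (not the study's data), estimates each listener's categorization slope by linear regression on logit-transformed response proportions (a simple stand-in for a full logistic fit) and the N1 slope by a linear fit across the VOT continuum, then correlates both with a placeholder GMV value; all names, parameter values, and the 9-step continuum are illustrative assumptions.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

def categorization_slope(vot, p_resp):
    """Slope of the categorization function, via linear regression on
    logit-transformed /p/-response proportions (stand-in for a logistic fit)."""
    p = np.clip(p_resp, 0.01, 0.99)          # avoid infinite logits at 0 and 1
    logit = np.log(p / (1.0 - p))
    return float(np.polyfit(vot, logit, 1)[0])

def n1_slope(vot, n1_amp):
    """Slope of N1 amplitude across the VOT continuum (linear fit)."""
    return float(np.polyfit(vot, n1_amp, 1)[0])

# Hypothetical data for illustration only
rng = np.random.default_rng(1)
vot = np.linspace(0.0, 50.0, 9)              # assumed 9-step /b/-/p/ continuum (ms)
gmv, cat_slopes, n1_slopes = [], [], []
for _ in range(20):                          # 20 simulated listeners
    g = rng.normal(5.0, 0.5)                 # placeholder regional GMV (cm^3)
    k = 0.05 + 0.02 * (g - 5.0)              # tie both slopes loosely to GMV
    p_resp = 1.0 / (1.0 + np.exp(-k * 4.0 * (vot - 25.0)))
    p_resp = np.clip(p_resp + rng.normal(0, 0.02, vot.size), 0.0, 1.0)
    n1 = -4.0 + k * vot + rng.normal(0, 0.2, vot.size)
    gmv.append(g)
    cat_slopes.append(categorization_slope(vot, p_resp))
    n1_slopes.append(n1_slope(vot, n1))

r_cat = pearson_r(gmv, cat_slopes)           # GMV vs. categorization slope
r_n1 = pearson_r(gmv, n1_slopes)             # GMV vs. N1 slope
```

In practice the same per-listener slopes would be computed from the behavioral and EEG data and correlated with each parcellated region's GMV separately for left and right hemispheres.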
As predicted, we found that GMVs of IFG (left IFG: r=-0.59; right IFG: r=-0.47) and MTG (left MTG: r=-0.31; right MTG: r=-0.47) were negatively correlated with the slope of the N1, whereas GMVs of HG (left HG: r=0.14; right HG: r=0.35) and PT (left PT: r=0.44; right PT: r=0.17) were positively correlated with the slope of the N1. Surprisingly, we found that STG was negatively correlated with the slope of the N1 (left STG: r=-0.08; right STG: r=-0.20), though this is consistent with some intracranial work arguing that STG represents phonemes rather than acoustic cues. As expected, GMVs of HG (left HG: r=-0.42; right HG: r=-0.41) and PT (left PT: r=-0.26; right PT: r=-0.57) were negatively correlated with the slope of the categorization function. In contrast to our predictions, GMVs of MTG (left MTG: r=-0.57; right MTG: r=-0.43) and IFG (left IFG: r=-0.23; right IFG: r=-0.07) were also negatively correlated with categorization slope. More work is needed to better understand the relationship between GMV and listeners' categorization functions. Nevertheless, our results suggest that listeners with greater GMV in PT and HG were more precise in their acoustic encoding, whereas listeners with greater GMV in MTG and IFG were more susceptible to top-down effects, resulting in poorer bottom-up encoding.
Topic Areas: Speech Perception, Perception: Auditory