Poster B7, Thursday, August 16, 3:05 – 4:50 pm, Room 2000AB
Neural representation of phonemic categories in tonotopic auditory cortex
Deborah F. Levy¹, Stephen M. Wilson¹; ¹Vanderbilt University Medical Center
How do our brains transform continuously varying acoustic signals into categorical phonemes? Categorical perception is one of the most fundamental processes in speech perception, but it is not yet known where in the auditory processing stream representations of speech sounds cease to be veridical (faithfully representing the exact acoustic properties of the stimulus) and become categorical (representing sounds as linguistic categories). In this study, we used functional MRI and multivariate pattern analysis to investigate the representation of vowels in tonotopic primary auditory cortex (PAC). We addressed two questions: (1) Can phonologically contrastive but acoustically similar vowel phonemes be distinguished from one another based on neural activity in PAC? (2) Is there any evidence that differential sensitivity to phonological boundaries may begin to emerge at the level of PAC? We scanned fifteen participants with 7 Tesla fMRI. First, participants’ individual categorical boundaries for synthetic vowels along a continuum from [i] to [ɪ] were determined behaviorally using identification and discrimination tasks. Then, for each participant, four vowels that were equidistant in acoustic space but perceptually grouped into two phonemic categories were generated. Next, tonotopic maps were created for each participant based on phase-encoded analysis of passive listening to non-linguistic frequency sweeps. The bounds of PAC were defined both functionally and anatomically as tonotopic voxels falling within Heschl’s gyrus. Each participant’s four vowels were then presented in a block design with an irrelevant but attention-demanding level-change detection task.
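The behavioral boundary-estimation step described above can be sketched as fitting a logistic psychometric function to identification responses and taking its midpoint as the categorical boundary. This is a minimal illustration, not the authors' actual procedure; the step values and response proportions below are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Logistic psychometric function: P(identify as [I]) at continuum step x.
    x0 is the inflection point (the categorical boundary), k the slope."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Invented example data: proportion of [I] responses at each step
# along a synthetic [i]-to-[I] continuum.
steps = np.arange(1, 8)
prop_I = np.array([0.02, 0.05, 0.10, 0.45, 0.90, 0.96, 0.99])

(x0, k), _ = curve_fit(logistic, steps, prop_I, p0=[4.0, 1.0])
print(f"Estimated categorical boundary: step {x0:.2f} (slope {k:.2f})")
```

With a boundary estimated per participant, stimuli straddling versus falling within a category can then be chosen at equal acoustic spacing.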
Finally, we used multivariate pattern analysis to determine (1) whether the endpoint, prototypical vowels could be distinguished from each other using information contained in PAC; and (2) whether the vowel pair that crossed the categorical boundary would be more neurally discriminable than the equally spaced vowel pairs that did not cross the boundary and so fell within the same perceptual category. We found that participants’ endpoint [i] and [ɪ] tokens could be robustly distinguished from each other using neural data contained in PAC (mean classifier accuracy=64.6%, t(14)=5.54, p<0.001, one-tailed) and that discrimination was better between pairs of vowels that crossed the categorical boundary than those that did not (mean difference in classifier accuracy=4.8%, t(14)=2.15, p=0.025, one-tailed). These findings demonstrate that even acoustically similar phonemes can be discriminated based on information contained within PAC. Previous research using intracranial electrocorticography has provided some evidence that speech sounds may be represented categorically in the lateral superior temporal gyrus. Our findings suggest that PAC, an upstream region often thought to purely represent the acoustic properties of a stimulus, may already be warping acoustic representations towards phonemic, linguistically relevant categories.
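The pattern-analysis logic above — classify two conditions from voxel patterns within each participant, then test group-level classifier accuracy against chance with a one-tailed one-sample t-test — can be sketched as follows. This is a simplified stand-in, assuming a nearest-class-mean classifier with leave-one-trial-out cross-validation; all data are simulated, and the effect size is invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_trials, n_voxels = 15, 40, 200
labels = np.repeat([0, 1], n_trials // 2)  # two vowel conditions

def loo_accuracy(X, y):
    """Leave-one-trial-out accuracy of a nearest-class-mean classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        m0 = X[mask & (y == 0)].mean(axis=0)
        m1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - m1) < np.linalg.norm(X[i] - m0))
        correct += pred == y[i]
    return correct / len(y)

accuracies = []
for _ in range(n_subjects):
    # Simulated voxel patterns: Gaussian noise plus a small additive
    # signal that separates the two conditions (invented effect).
    signal = rng.normal(size=n_voxels)
    X = rng.normal(size=(n_trials, n_voxels)) + 0.1 * np.outer(labels, signal)
    accuracies.append(loo_accuracy(X, labels))

t_stat, p_two = stats.ttest_1samp(accuracies, 0.5)
print(f"mean accuracy={np.mean(accuracies):.3f}, "
      f"t({n_subjects - 1})={t_stat:.2f}, p(one-tailed)={p_two / 2:.4f}")
```

The across- versus within-category comparison reported in the abstract would follow the same pattern, with a paired (within-participant) t-test on the difference in accuracies between boundary-crossing and non-crossing vowel pairs.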
Topic Area: Perception: Speech Perception and Audiovisual Integration