
Poster B60, Tuesday, August 20, 2019, 3:15 – 5:00 pm, Restaurant Hall

Lexical information guides retuning of neural patterns in perceptual learning of speech

Sahil Luthra1, Joao M. Correia2, Dave F. Kleinschmidt3, Laura Mesite4, Emily B. Myers1,5; 1University of Connecticut, 2Basque Center on Cognition, Brain and Language, 3Rutgers University, 4Harvard Graduate School of Education, 5Haskins Laboratories

Listeners make perceptual adjustments in how acoustic information maps onto internal phonetic categories. This process of phonetic recalibration can be guided by context such as lexical knowledge (Norris, McQueen, & Cutler, 2003). Myers and Mesite (2014) examined the neural basis of phonetic recalibration using fMRI. During exposure blocks, participants heard speech sounds that were ambiguous between ‘s’ and ‘sh,’ with one group hearing these sounds in lexical contexts that biased them towards ‘s’ and another group in contexts that biased them towards ‘sh.’ The size of the biasing effect was subsequently measured with a categorization task on an ‘asi’-‘ashi’ continuum. As predicted, lexical context affected the subsequent placement of the category boundary, although there was considerable trial-to-trial variability in categorization of ambiguous tokens. Myers and Mesite analyzed how region-specific activity changed as a function of the biasing context, but such an analysis cannot provide insight into how the specific pattern of activation might change as a result of phonetic recalibration. In the current study, we re-analyzed archival data from Myers and Mesite (2014), leveraging a machine learning algorithm (a support vector machine with recursive feature elimination) to examine changes in functional activation during phonetic recalibration. The classifier was trained on the multi-voxel patterns from the unambiguous endpoints of the ‘asi’-‘ashi’ continuum and then tested on patterns from ambiguous tokens taken from the middle of the continuum; in this way, we asked whether the information that was useful for discriminating between continuum endpoints generalized to classify the ambiguous tokens. Critically, the classifier successfully discriminated between ambiguous trials on the basis of subjects’ behavioral responses (i.e., whether the subject perceived the stimulus as ‘s’ or ‘sh’ on that particular trial). However, it did not achieve above-chance accuracy when these same tokens were labeled with respect to the underlying acoustics. We take these findings as evidence that phonetic recalibration involves neural recalibration. That is, the activation pattern on a given trial predicts the participant’s ultimate decision about how they heard that ambiguous stimulus. For instance, if a participant perceived a given stimulus as ‘sh’ on a particular trial, the pattern more closely resembled the patterns for unambiguous versions of ‘sh’ than those of ‘s.’ Targeted ROI analyses showed that left parietal regions (supramarginal and angular gyri) were the most informative for categorization. This finding is consistent with research suggesting a role for left parietal regions in lexical influences on phonetic processing (e.g., Gow, Segawa, Ahlfors, & Lin, 2008). Overall, the pattern of neural activity across a variety of regions, but especially left parietal areas, reflects listeners’ ultimate perception of ambiguous sounds rather than the bottom-up acoustics.
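
The following is a minimal sketch, not the authors' analysis code, of the train-on-endpoints, test-on-ambiguous-tokens scheme described above, using a linear support vector machine with recursive feature elimination in scikit-learn. All variable names, file names, and the number of retained voxels are hypothetical placeholders.

```python
# Sketch of the classification analysis: train an SVM (with recursive feature
# elimination over voxels) on unambiguous endpoint trials, then test whether it
# generalizes to ambiguous trials labeled by perception vs. by acoustics.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

# Multi-voxel patterns (trials x voxels) for unambiguous continuum endpoints,
# with 's' / 'sh' labels (0 = 's', 1 = 'sh'). File names are hypothetical.
endpoint_patterns = np.load("endpoint_patterns.npy")
endpoint_labels = np.load("endpoint_labels.npy")

# Patterns for ambiguous mid-continuum tokens, with two candidate labelings:
# the listener's trial-by-trial response and the underlying acoustic category.
ambiguous_patterns = np.load("ambiguous_patterns.npy")
response_labels = np.load("response_labels.npy")
acoustic_labels = np.load("acoustic_labels.npy")

# Linear SVM wrapped in recursive feature elimination: iteratively drop the
# least informative voxels, keeping (for illustration) the 500 most informative.
svm = SVC(kernel="linear", C=1.0)
rfe = RFE(estimator=svm, n_features_to_select=500, step=0.1)
rfe.fit(endpoint_patterns, endpoint_labels)

# Generalization test: classification of ambiguous trials should exceed chance
# under perceptual labels but not under acoustic labels.
acc_by_response = rfe.score(ambiguous_patterns, response_labels)
acc_by_acoustics = rfe.score(ambiguous_patterns, acoustic_labels)
print(f"Accuracy with perceptual labels: {acc_by_response:.3f}")
print(f"Accuracy with acoustic labels:   {acc_by_acoustics:.3f}")
```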

Themes: Perception: Speech Perception and Audiovisual Integration, Perception: Auditory
Method: Functional Imaging
