Slide Slam O9
Right posterior temporal cortex supports integration of phonetic and talker information
Sahil Luthra1, James S. Magnuson1,2,3, Emily B. Myers1; 1University of Connecticut, 2Basque Center on Cognition Brain and Language, 3Ikerbasque - Basque Foundation for Science
Bayesian models of spoken word recognition posit that to accurately perceive the speech signal, listeners can condition phonetic identity on talker information (Kleinschmidt, 2019). Consistent with this, perceptual learning studies indicate that listeners can adapt to the idiosyncratic ways that different talkers produce their speech sounds, maintaining distinct sets of beliefs for distinct talkers (Kraljic & Samuel, 2007). Neuroimaging data suggest that talker-specific phonetic learning is partly supported by the right posterior temporal cortex (Myers & Mesite, 2014; Luthra, Correia, Kleinschmidt, Mesite, & Myers, 2020). This is a striking suggestion: though the right hemisphere has been implicated in talker processing (Van Lancker & Kreiman, 1987), it is thought to play a minimal role in phonetic processing, at least in comparison to the left hemisphere (Hickok & Poeppel, 2007). In the current work, we test the hypothesis that the right posterior temporal cortex supports talker-specific phonetic learning by integrating phonetic information with talker information. Listeners (N=20) completed a lexically guided perceptual learning task in which they heard a male talker and a female talker. One talker produced an ambiguous /s/-/ʃ/ blend in lieu of /s/, and the other produced the ambiguous fricative in place of /ʃ/. Functional activation was measured with fMRI and submitted to multi-voxel pattern analyses. Of interest was whether activation patterns in the right temporal cortex could be used to classify trials both by talker identity and by phonetic identity. We did not observe perceptual learning in our data; follow-up experiments suggest this was attributable to our in-scanner headphones, which attenuated frequencies above 5000 Hz. Nevertheless, searchlight analyses indicated that patterns of activation in the right superior temporal sulcus (STS) contained information both about who was talking and about which phoneme they produced.
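The core MVPA logic, training classifiers to decode both talker identity and phonetic identity from the same voxel patterns, can be sketched as follows. This is a toy illustration on synthetic data, not the authors' actual pipeline; the trial and voxel counts, the signal strengths, and the choice of a logistic-regression classifier are all assumptions for demonstration.

```python
# Toy sketch of decoding two trial labels (talker, phoneme) from the
# same simulated voxel patterns. All numbers here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 100  # hypothetical counts

# Binary labels per trial: talker (0 = male, 1 = female)
# and phoneme (0 = /s/, 1 = /sh/)
talker = rng.integers(0, 2, n_trials)
phoneme = rng.integers(0, 2, n_trials)

# Synthetic activation patterns: noise plus weak signal for each label
X = rng.normal(size=(n_trials, n_voxels))
X[:, :10] += talker[:, None] * 0.8    # voxels carrying talker information
X[:, 10:20] += phoneme[:, None] * 0.8  # voxels carrying phoneme information

# Cross-validated decoding accuracy for each label from the same patterns;
# above-chance accuracy for both indicates the region carries both kinds
# of information, analogous to the searchlight result described above.
for name, y in [("talker", talker), ("phoneme", phoneme)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name} decoding accuracy: {acc:.2f}")
```

In a real searchlight analysis, this classification would be repeated within a small sphere of voxels centered on each voxel in turn (e.g. with nilearn's searchlight tools), producing a whole-brain map of decoding accuracy rather than a single score.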
We take this finding as evidence that talker information and phonetic information are integrated in the right STS. We also examined how the right-hemisphere voxels that contained information about talker identity, namely those in the right superior temporal gyrus (STG) and right STS, were functionally connected to other parts of the brain. When listeners were engaged in phonetic processing, connectivity increased between the right STG/STS seed and both left-hemisphere regions associated with speech perception and right-hemisphere regions associated with talker processing. Thus, our functional connectivity analysis suggests that conditioning phonetic identity on talker information involves the coordinated activity of both hemispheres. Overall, this work supports a role for the right hemisphere in talker-specific phonetic processing. Our results suggest that the integration of phonetic information and talker information is achieved through two mechanisms: (1) the simultaneous encoding of phonetic and talker information in the right STS and (2) the coordinated activity of a left-lateralized phonetic processing system and a right-lateralized talker processing system. Future work will be needed to investigate whether the right STS plays a similar role in perceptual learning specifically, since we did not observe learning in the current study.