Slide Slam F3
The syllable we hear during binaural integration is represented in non-auditory cortical areas
Basil Preisig1,2,3, Lars Riecke4, Alexis Hervais-Adelman1; 1Department of Psychology, Neurolinguistics, University of Zurich, Zurich, Switzerland, 2Donders Institute for Cognitive Neuroimaging, Radboud University, Nijmegen, the Netherlands, 3Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands, 4Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands
Binaural integration may arise during dichotic listening, when acoustically different stimuli are presented to each ear. Under some circumstances, binaural integration yields an auditory percept that is not physically present in either stimulus. Here, we used fMRI and multi-voxel pattern analysis (MVPA) to ask where in the cerebral cortex syllable percepts emerge during binaural integration. Twenty-seven right-handed listeners with no history of hearing impairment (M=21.89 years, SD=3.14, 8 male) took part in an fMRI study in which one ear was presented with an ambiguous syllable (intermediate between /da/ and /ga/) and the other with an acoustic feature (third formant, F3). The contralateral F3 could be low (2.5 kHz, consistent with /ga/) or high (2.9 kHz, consistent with /da/). If the dichotically presented information is binaurally integrated, the F3 biases the perceived syllable. On every trial, participants reported whether they heard a /da/ or a /ga/ syllable. In each of four fMRI runs (~7 min each), participants heard 30 high-F3 and 30 low-F3 dichotic stimuli as well as 24 unambiguous control stimuli (12 /da/ and 12 /ga/). In the unambiguous control stimuli, a clear syllable was presented to one ear and the F3 consistent with that syllable was presented to the other ear; hence, these stimuli could be readily interpreted based on monaural input alone. We used an MVPA searchlight analysis (radius 8 mm, 251 voxels, constrained to areas responding significantly to sound at the group level, p<.001 uncorrected) to identify brain areas whose response patterns consistently differentiate the reported syllable and generalize across unambiguous and ambiguous stimuli. Representational consistency was evaluated using the cross-validated Mahalanobis (crossnobis) distance.
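The crossnobis distance mentioned above can be illustrated with a minimal sketch: pattern differences between two conditions are noise-whitened and their inner products are averaged across independent cross-validation folds, which removes the positive bias that plain Mahalanobis distances carry. The function name, input shapes, and the identity noise covariance in the example are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def crossnobis(patterns_a, patterns_b, noise_cov):
    """Cross-validated Mahalanobis (crossnobis) distance between two
    conditions -- a minimal illustrative sketch.

    patterns_a, patterns_b: (n_folds, n_voxels) condition-mean activity
        patterns, one row per independent cross-validation fold (e.g. run).
    noise_cov: (n_voxels, n_voxels) noise covariance estimate.
    """
    prec = np.linalg.inv(noise_cov)   # noise precision used for whitening
    diffs = patterns_a - patterns_b   # per-fold pattern differences
    n = len(diffs)
    d = 0.0
    # Average the whitened inner product over pairs of *different* folds;
    # excluding same-fold pairs makes the estimate unbiased, so the
    # expected distance is zero when the two conditions do not differ.
    for i in range(n):
        for j in range(n):
            if i != j:
                d += diffs[i] @ prec @ diffs[j]
    return d / (n * (n - 1))
```

Because the estimate is unbiased, it can legitimately come out negative for undifferentiated conditions, which is what makes it suitable for the group-level permutation tests described here.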
Group-level permutation analysis revealed a number of clusters that consistently differentiated the reported syllable for both unambiguous stimuli and binaural-integration stimuli. BOLD activity patterns in the left anterior insula (AI), the left supplementary motor area (SMA), the left ventral motor cortex, and the right somatosensory cortex (M1/S1) represented the participant's syllable report. However, because the physical stimulus characteristics and their perceptual interpretation are confounded, these categorical response patterns could be driven by the stimulus acoustics, the syllable percept, or both. In follow-up analyses, we therefore tested whether these regions carry information about the syllable percept (/da/ vs. /ga/ response) alone or rather about the stimulus acoustics (high vs. low F3), by recomputing the crossnobis distances between syllable reports within each stimulus class (high/low F3) alone. The converse was also calculated: the distance between the two acoustic stimuli within each syllable report (/da/ or /ga/). In both cases, the distances were cross-validated as before, against the syllable reports for unambiguous stimuli. In the aforementioned areas, crossnobis distances were larger between different syllable percepts than between different stimulus acoustics. The same areas have previously been implicated in perceptual decision-making (AI), response selection (SMA), and response initiation and feedback (somatosensory cortex). Our results indicate that the syllable percept during binaural integration is constructed mainly in brain regions beyond the auditory cortex.
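The logic of the follow-up analyses, which disentangle percept from acoustics, amounts to partitioning trials two ways: comparing /da/ vs. /ga/ reports while holding the F3 class fixed (percept information), and comparing high vs. low F3 while holding the report fixed (acoustic information). A simplified sketch, using random placeholder data and a plain Euclidean distance instead of the cross-validated crossnobis distance, and with all names hypothetical:

```python
import numpy as np

# Hypothetical trial table: each trial has an F3 condition, a syllable
# report, and an activity pattern (random placeholder data here).
rng = np.random.default_rng(0)
trials = [
    {"f3": f3, "report": rep, "pattern": rng.normal(size=5)}
    for f3 in ("high", "low") for rep in ("da", "ga") for _ in range(10)
]

def mean_pattern(f3, report):
    """Mean activity pattern for one F3 class x syllable-report cell."""
    sel = [t["pattern"] for t in trials
           if t["f3"] == f3 and t["report"] == report]
    return np.mean(sel, axis=0)

# Percept information with acoustics held constant:
# /da/ vs. /ga/ reports compared within each F3 class.
percept_dist = {
    f3: np.linalg.norm(mean_pattern(f3, "da") - mean_pattern(f3, "ga"))
    for f3 in ("high", "low")
}

# Acoustic information with the percept held constant:
# high vs. low F3 compared within each syllable report.
acoustic_dist = {
    rep: np.linalg.norm(mean_pattern("high", rep) - mean_pattern("low", rep))
    for rep in ("da", "ga")
}
```

In the study itself, both sets of distances were additionally cross-validated against the unambiguous stimuli; the sketch only shows how the trial partitioning keeps one factor constant while testing the other.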