Slide Sessions

Slide Session A

Wednesday, October 25, 1:30 - 2:30 pm CEST, Auditorium

Chair: Suhail Matar, New York University

Convergent neural signatures of speech prediction error across space, time and frequency

Ediz Sohoglu1, Loes Beckers2, Matthew Davis3; 1University of Sussex, 2Donders Institute for Brain, Cognition and Behaviour, 3University of Cambridge

We use MEG and fMRI to determine how predictions are combined with speech input in superior temporal cortex. Previous work suggests that cortical speech representations are best explained by prediction error computations rather than the alternative ‘sharpened signal’ account (Blank and Davis 2016 PLoS Biology; Sohoglu and Davis 2020 eLife). However, this previous work used an artificial listening situation in which speech was highly distorted and predictions were obtained from external written cues. In the current work we explore a more naturalistic listening situation in which speech is clearly presented and predictions are obtained directly from the speech signal itself, i.e., based on lexical knowledge of which speech sounds are likely to be heard next for familiar words. In one MEG experiment (N=19) and one fMRI experiment (N=21), we compared neural responses to bisyllabic spoken words (e.g. beta, data, lotus, foetus) in which the first syllable strongly or weakly predicts the form of the second syllable. In addition, we compared neural responses to the same second syllables when heard in a pseudoword context (e.g. beetus, datus, lota, foeta). Pseudowords, by definition, are previously unfamiliar to participants and therefore the second syllable of these items mismatches with listeners’ predictions. Critically, computational simulations show that these experimental manipulations of prediction strength and match/mismatch lead to dissociable outcomes for sharpened signal and prediction error representations of the second syllable, enabling us to adjudicate between these two computational accounts. We measured neural responses using 306-channel MEG and 3T fMRI in separate groups of listeners while they performed an incidental (pause detection) listening task to maintain attention. Across multiple imaging modalities (MEG, fMRI), analysis approaches (univariate, multivariate), and signal domains (phase-locked, evoked activity and induced, time-frequency MEG responses), we show that neural representations of second syllables are suppressed by strong predictions when predictions match sensory input. Neural representations of the same second syllables show the opposite effect (i.e., enhanced representations following strongly- rather than weakly-predicting syllables) when predictions mismatch with sensory input. In line with our computational simulations, this interaction between prediction strength and congruency is consistent with prediction error but not sharpened signal computations. We further show that the neural signature of prediction error occurs early in processing (beginning 168 ms after the onset of the second syllable), localises to early auditory regions (in fMRI, bilateral Heschl’s gyrus and STG) and is expressed as changes in low-frequency (alpha and theta) power. Our study therefore provides convergent neural evidence that speech perception is supported by the computation of prediction error representations in auditory brain regions. Overall, our findings show that neural computations of prediction error play a central role in the identification of familiar spoken words and perception of unfamiliar pseudowords.
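
For readers who want to see the logic behind the simulations, the following is a minimal, self-contained sketch of how prediction-error and sharpened-signal representations can dissociate. It is not the authors' model: the binary feature patterns, the subtractive error rule and the multiplicative sharpening rule are illustrative assumptions made only for this example.

```python
import numpy as np

n_features = 50  # hypothetical feature dimension for one syllable

# Non-overlapping feature patterns standing in for two possible second syllables
syll_ta = np.zeros(n_features)
syll_ta[:10] = 1.0     # e.g. the "-ta" ending
syll_tus = np.zeros(n_features)
syll_tus[10:20] = 1.0  # e.g. the "-tus" ending

def representations(heard, predicted, strength):
    """Return norms of the prediction-error and sharpened representations.

    heard     : feature vector of the second syllable actually presented
    predicted : feature vector predicted from the first syllable
    strength  : 0..1, how strongly the first syllable predicts `predicted`
    """
    error = heard - strength * predicted              # subtractive prediction error
    sharpened = heard * (1.0 + strength * predicted)  # prediction-weighted gain on the input
    return np.linalg.norm(error), np.linalg.norm(sharpened)

for strength in (0.25, 0.75):  # weakly vs strongly predicting first syllable
    pe_match, sh_match = representations(syll_ta, syll_ta, strength)
    pe_mismatch, sh_mismatch = representations(syll_ta, syll_tus, strength)
    print(f"strength {strength}: prediction error match={pe_match:.2f}, "
          f"mismatch={pe_mismatch:.2f}; sharpened match={sh_match:.2f}, "
          f"mismatch={sh_mismatch:.2f}")
```

In this toy version, only the prediction-error quantity shows the crossover interaction described in the abstract: matching input is suppressed and mismatching input is enhanced as prediction strength grows, whereas sharpening never weakens the representation of a matching syllable.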

A Corollary Discharge Circuit in Human Speech

Amirhossein Khalilian-Gourtani1, Ran Wang1, Xupeng Chen1, Leyao Yu1, Patricia Dugan1, Daniel Friedman1, Werner Doyle1, Orrin Devinsky1, Yao Wang1, Adeen Flinker1; 1New York University

Introduction: Any motor action directly activates the relevant sensory system. It is crucial for the brain to distinguish between self-initiated and external sensations. To address this, a fundamental neural circuit has evolved that informs sensory cortex about forthcoming actions through motor signals, referred to as corollary discharge (CD). Although there is substantial evidence of CD signals in various sensory modalities and animal species, the exact source and dynamics of CD in the human auditory system remain unknown. Methods: To investigate the CD signal in human speech, we utilize the excellent spatiotemporal resolution of electrocorticography (ECoG) and acquire recordings from eight neurosurgical patients while they perform an auditory repetition task (subjects were instructed to listen and then repeat single words freely when ready). We focus our analysis on neural activity in the high-gamma broadband range (70-150 Hz). Noting that CD, by nature, is a blueprint of the motor commands sent to auditory cortex, we present a novel directed connectivity analysis framework that allows us to study the information flow between different brain regions. We use autoregressive models to probe the Granger-causal relations between all electrodes sampling cortex. To distill the large set of neural connectivity patterns into a few dominant ones, we employ unsupervised clustering. This approach elucidates dominant information flow (source and target) as well as prototypical temporal connectivity patterns (tested against permutation distributions at p<0.05 for statistical significance). Results: We apply our directed connectivity analysis framework to the neural recordings during auditory word repetition. Our results show three distinct phases during the task, likely related to comprehension, pre-articulatory preparation, and speech production. Locked to word articulation, we find a distinct component peaking at -107 ms relative to articulation onset with directed influence from speech motor cortex onto auditory cortex (STG). In contrast to connectivity, high-gamma activation reveals pre-articulatory neural activity in multiple cortical regions (including pre- and post-central and inferior frontal gyri) as well as subsequent STG suppression. However, the directed connectivity approach pinpoints a directed information flow originating in ventral precentral gyrus and targeting STG before articulation onset. The degree of this directed influence on auditory electrodes significantly predicts speech-induced suppression in STG (Pearson correlation, R=0.43, p=1.46e-4). Further, we replicate this finding across different speech production tasks: naming pictures, reading written words, naming auditory word descriptions, and completing auditory sentences. An analysis of variance of corollary discharge connectivity across the dorsal and ventral divisions of sensorimotor cortex and across tasks confirms ventral precentral gyrus as the source (region main effect F(3,369)=12.48, p=9.04e-8; task main effect F(4,369)=1.02, p=0.397; no interaction, F=0.53, p=0.896). Conclusions: Theoretical frameworks suggest that an auditory corollary discharge signal plays a role in enhancing our sensitivity to our own speech, and that its dysfunction can contribute to experiencing auditory hallucinations. Our findings present the first direct evidence for the source and timing of this signal within the human auditory system. These results have significant implications for understanding speech motor control and for investigating psychotic symptoms in humans.
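
As a rough illustration of the directed-connectivity logic described above, the sketch below runs a pairwise Granger-causality test on simulated high-gamma envelopes from a hypothetical motor and auditory electrode. The simulated coupling, the lag order and the use of statsmodels are assumptions for this example only; the authors' framework fits autoregressive models across all electrode pairs and clusters the resulting connectivity patterns, which this toy example does not attempt.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Simulated high-gamma envelopes (70-150 Hz power) for two electrodes, with an
# assumed motor -> auditory coupling at a two-sample lag, so that the example
# has a detectable directed influence to recover.
rng = np.random.default_rng(1)
n_samples = 500
motor = rng.standard_normal(n_samples)        # "ventral precentral" electrode
auditory = np.zeros(n_samples)                # "STG" electrode
for t in range(2, n_samples):
    auditory[t] = 0.5 * auditory[t - 1] + 0.6 * motor[t - 2] + rng.standard_normal()

# Test the directed influence motor -> auditory with a lag-5 autoregressive model.
# grangercausalitytests expects columns ordered [effect, putative cause].
results = grangercausalitytests(np.column_stack([auditory, motor]), maxlag=5)
f_stat, p_value, _, _ = results[5][0]['ssr_ftest']
print(f"motor -> auditory: F={f_stat:.1f}, p={p_value:.2e}")
```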

How does the nature of a writing system shape the cognitive and neural mechanisms for reading acquisition?

Joanne Taylor1, Adam Jowett2, Tibor Auer3, Angelika Lingnau4, Kathleen Rastle2; 1University College London, 2Royal Holloway University of London, 3University of Surrey, 4University of Regensburg

Introduction: Reading is accomplished via two pathways, one based on sub-word information that maps print onto sound and then onto meaning, and one based on whole-word information that maps print directly onto meaning. These processes depend on different neural systems. Dorsal pathway regions, such as left inferior parietal cortex and inferior frontal gyrus, support print-to-sound mapping, whereas ventral pathway regions, such as left middle temporal and anterior fusiform gyri, in addition to left angular gyrus, support print-to-meaning mapping (Carreiras et al., 2014; Price, 2012; Taylor et al., 2013). The ‘division of labour’ (Plaut et al., 1996) between these pathways may depend on the nature of the writing system. Alphabetic writing systems, like Spanish, have systematic print-to-sound relationships and likely rely on sub-word more than whole-word information, at least early in reading acquisition. In contrast, logographic writing systems like Chinese have little systematicity between print and sound and may be more reliant on whole-word information. We tested this hypothesis using behavioural and neural measures in an experiment in which adults learned to read novel words written in two different writing systems. Method: 24 adults learned to read two sets of pseudowords over 10 days. One set was written in an alphabetic writing system, with regular print-to-sound mappings, and the other in a logographic writing system, with arbitrary print-to-sound mappings. Training tasks and post-tests at the end of training focused on print-to-sound or print-to-meaning relationships. At the end of training, neural activity was recorded using fMRI while participants made meaning judgements about trained written stimuli. Results: In post-tests, reading aloud was faster and more accurate in the alphabetic system than in the logographic system. In contrast, saying the meanings of written words was equivalently accurate for the two writing systems, but faster for the logographic system. Univariate analyses of fMRI data showed that activity was greater for the alphabetic than the logographic system in dorsal pathway regions, including left inferior frontal gyrus, inferior parietal cortex, bilateral precentral gyri, and supplementary motor area. In contrast, activity was greater for the logographic than the alphabetic system in ventral pathway regions including bilateral middle temporal gyri and right parahippocampal gyrus, as well as bilateral occipito-parietal cortex, left angular gyrus, and the precuneus. Discussion: Despite engaging in the same training tasks and achieving good performance in both writing systems, participants learned these writing systems differently. Print-to-sound mapping was superior for the alphabetic writing system, which engaged dorsal brain regions. In contrast, print-to-meaning mapping was superior for the logographic writing system, which engaged ventral brain regions and additional areas thought to be involved in meaning processing. These results demonstrate how the writing system shapes the division of labour between print-to-sound and print-to-meaning pathways in reading acquisition.
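
As an illustration of how a univariate contrast of this kind is computed, the sketch below fits a toy general linear model to a simulated voxel time course and tests an alphabetic > logographic contrast. The boxcar regressors, effect sizes and omission of haemodynamic convolution are simplifying assumptions; this is not the study's design or analysis code.

```python
import numpy as np
from scipy import stats

# Toy general linear model for one voxel: boxcar regressors for alphabetic and
# logographic trials (no haemodynamic convolution) plus a constant, and a
# t-test of the alphabetic > logographic contrast. Entirely simulated data.
rng = np.random.default_rng(3)
n_scans = 200
alphabetic = (np.arange(n_scans) % 20 < 5).astype(float)
logographic = ((np.arange(n_scans) + 10) % 20 < 5).astype(float)
X = np.column_stack([alphabetic, logographic, np.ones(n_scans)])          # design matrix
y = 1.0 * alphabetic + 0.4 * logographic + rng.standard_normal(n_scans)   # voxel signal

beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta
dof = n_scans - np.linalg.matrix_rank(X)
sigma2 = residuals @ residuals / dof
contrast = np.array([1.0, -1.0, 0.0])                                     # alphabetic > logographic
t_stat = (contrast @ beta) / np.sqrt(sigma2 * contrast @ np.linalg.inv(X.T @ X) @ contrast)
p_value = stats.t.sf(t_stat, dof)                                         # one-tailed p-value
print(f"alphabetic > logographic: t({dof}) = {t_stat:.2f}, p = {p_value:.3g}")
```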

Interruptions of the left posterior occipito-temporal sulcus longitudinally support reading acquisition

Florence Bouhali1,2, Jessica Dubois3,4, Kevin Weiner5, Fumiko Hoeft1,6; 1University of California San Francisco, 2Aix-Marseille University, 3Université Paris Cité, 4Université Paris-Saclay, 5University of California Berkeley, 6University of Connecticut

Literacy learning builds on cognitive and neural architectures that are partially in place by the age at which children start literacy instruction. Many cognitive factors have been reported to predispose a child to acquire adequate reading skills, such as good phonological awareness and rapid naming abilities. At the neural level, previous research has shown associations between sulcal interruptions of the left occipito-temporal sulcus (OTS) and reading skills in 10-year-old children and adults (Borst et al., 2016; Cachia et al., 2018), likely reflecting early constraints determined in utero. Here, we studied the relationship between the sulcal morphology of ventral temporal cortex (VTC) and the development of reading skills, to (i) confirm the role of left OTS interruptions as a longitudinal predictor of reading skills, and (ii) evaluate their predictive power relative to benchmark cognitive precursors of reading. We also explored whether (iii) the morphology of the left mid-fusiform sulcus (MFS) would relate to reading skills, as this sulcus is a critical microstructural and functional landmark in VTC (Grill-Spector and Weiner, 2014) and has been functionally implicated in grapheme processing for phonological decoding (Bouhali et al., 2019). To this end, we identified the OTS and MFS in a cohort of 50 children followed longitudinally from the age of 5, at the onset of literacy instruction, to age 8, when most children have become fluent readers. Structural MRI scans and reading scores were available at both ages, with reading abilities additionally measured at age 7. Sulcal interruptions were stable across timepoints and better detected with longitudinal processing of the structural data. Consistent with previous findings (Borst et al., 2016; Cachia et al., 2018), the presence of an interruption in the posterior section of the left OTS (pOTS) was associated with better reading skills at age 8, but also with early and emerging reading skills at ages 7 and 5. The effect of left pOTS interruptions on reading abilities accumulated over time, as demonstrated by direct and indirect effects of pOTS interruptions on reading skills across the three consecutive timepoints in a serial mediation analysis. Accounting for over 22% of the variance in reading skills at age 8 over and above demographic variables, pOTS interruptions were the strongest longitudinal predictor of reading acquisition, well above typical cognitive predictors of reading measured in kindergarten, such as phonological awareness, rapid naming, letter knowledge, receptive vocabulary and non-verbal reasoning. In contrast, MFS morphology was not associated with reading. Overall, our results establish left pOTS interruptions as a robust predictor of reading acquisition, with early and accumulating effects independent of known cognitive precursors of reading.
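
The serial mediation logic (pOTS interruption to reading at age 5, then age 7, then age 8) can be sketched with a simple product-of-coefficients approach on simulated data, as below. The variable names, effect sizes and use of plain OLS are illustrative assumptions; a published analysis would typically rely on a dedicated mediation package with bootstrapped confidence intervals.

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-in for the longitudinal cohort (n = 50): a binary pOTS-
# interruption indicator and standardized reading scores at ages 5, 7 and 8.
# Effect sizes are arbitrary, chosen only so the paths are non-zero.
rng = np.random.default_rng(2)
n = 50
pots = rng.integers(0, 2, n).astype(float)
read5 = 0.5 * pots + rng.standard_normal(n)
read7 = 0.6 * read5 + 0.2 * pots + rng.standard_normal(n)
read8 = 0.6 * read7 + 0.2 * pots + rng.standard_normal(n)

def ols_coefs(y, *predictors):
    """Coefficients of y regressed on an intercept plus the given predictors."""
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(y, X).fit().params[1:]            # drop the intercept

a1 = ols_coefs(read5, pots)[0]                      # pOTS -> reading at 5
d21, a2 = ols_coefs(read7, read5, pots)             # reading 5 -> 7, pOTS -> reading at 7
b1, b2, c_prime = ols_coefs(read8, read5, read7, pots)  # paths into reading at 8
# a2 and b1 are the parallel paths; only the serial product is reported here.

serial_indirect = a1 * d21 * b2                     # pOTS -> read5 -> read7 -> read8
print(f"direct effect c' = {c_prime:.2f}, serial indirect effect = {serial_indirect:.2f}")
```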
