Slide Sessions


Slide Session B

Thursday, October 26, 8:00 - 9:00 am CEST, Auditorium

Intracranial EEG Reveals Simultaneous Encoding of Pre-activated and Currently Processed Information During Language Comprehension

Lin Wang1,2, Benchi Wang3, Ole Jensen4, Gina Kuperberg1,2; 1Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA, 2Department of Psychology, Tufts University, Medford, MA, USA, 3South China Normal University, Guangzhou, China, 4Centre for Human Brain Health, University of Birmingham, Birmingham, UK

Introduction. Prediction is thought to play a crucial role in ensuring that language comprehension is both fast and accurate. Indeed, during word-by-word comprehension, item-specific predictions can be decoded from unique temporal patterns of neural activity within the ventromedial temporal lobe, even before new bottom-up input becomes available. This raises two fundamental questions: How does the brain coordinate top-down predictive processing with bottom-up processing? And how does it segregate pre-activated representations from representations that are activated by the perceptual input? To address these questions, we combined intracranial EEG (iEEG) with Representational Similarity Analysis.

Methods. We collected iEEG data from 16 Chinese participants who had stereotactically implanted depth (stereo-EEG) electrodes as part of their clinical assessment for epilepsy (134 shafts; 2,116 electrode contacts distributed across multiple neuroanatomical regions). Participants silently read 306 sentences, presented word by word (300 ms duration, 400 ms ISI). The sentences were constructed in triplets, giving rise to two types of sentence pairs within each triplet: (1) Pre-target overlap pairs, which shared the identical pre-target word (e.g., 1a & 1b: “sleeping” – “sleeping”), and (2) Prediction overlap pairs, which shared the identical predictable upcoming word (e.g., 1b & 1c: “baby” – “baby”). Pairs drawn from different triplets shared neither the pre-target word nor the predicted word (No-overlap pairs).

1a. In the picture he saw a sleeping … (?)
1b. In the crib, there is a sleeping … (baby)
1c. In the hospital, there is a newborn … (baby)

At each electrode contact, between 300 and 500 ms following the onset of each pre-target word, we extracted the fine-grained temporal patterns produced by each type of pair.
We first asked where item-specific representations of the pre-target words were encoded, by identifying the regions where Pre-target overlap pairs produced more similar temporal patterns than No-overlap pairs. We then asked whether, within this same 300–500 ms time window, any of these regions additionally produced temporal patterns that encoded item-specific representations of the predicted targets, i.e., whether and where Prediction overlap pairs produced more similar temporal patterns than No-overlap pairs.

Results. (1) Between 300 and 500 ms, unique temporal patterns encoding item-specific representations of pre-target words were produced within (a) the left ventromedial temporal lobe (left fusiform, inferior temporal and medial temporal), (b) lateral temporal cortex (left middle temporal and bilateral superior temporal), and (c) left inferior parietal cortex. (2) In this same time window, the left ventromedial temporal lobe also produced distinct temporal patterns that encoded item-specific representations of predicted targets. However, the electrode contacts that showed the largest predicted-target temporal similarity effects showed the smallest pre-target effects, and vice versa.

Conclusions. Together, these findings point to a tightly coordinated system in which, between 300 and 500 ms, at the same time as new bottom-up input is accessing item-specific representations across left-lateralized temporal-parietal cortices, top-down predictions are pre-activating item-specific representations within the left ventromedial temporal lobe. They further suggest that, within left ventral temporal cortex, distinct neural populations may play a role in segregating information that is pre-activated based on the prior context from information that is activated by the current perceptual input.
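The core similarity contrast in this analysis (overlap pairs vs. No-overlap pairs) can be illustrated with a minimal sketch on simulated data. This is not the authors' pipeline; the noise levels, pattern lengths, and Pearson correlation as the similarity metric are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_samples = 50, 200  # e.g., a 300-500 ms window sampled at 1 kHz per contact

def temporal_similarity(a, b):
    # Correlation between the temporal patterns evoked by the two sentences in a pair
    return np.corrcoef(a, b)[0, 1]

# Simulated patterns: overlap pairs share an item-specific signal plus noise;
# No-overlap pairs are fully independent
shared = rng.standard_normal((n_pairs, n_samples))
overlap = [(s + 0.5 * rng.standard_normal(n_samples),
            s + 0.5 * rng.standard_normal(n_samples)) for s in shared]
no_overlap = [(rng.standard_normal(n_samples),
               rng.standard_normal(n_samples)) for _ in range(n_pairs)]

sim_overlap = np.array([temporal_similarity(a, b) for a, b in overlap])
sim_no_overlap = np.array([temporal_similarity(a, b) for a, b in no_overlap])

# A contact is taken to encode the item when overlap pairs are reliably more similar
print(sim_overlap.mean() > sim_no_overlap.mean())  # True
```

The same contrast applied to Prediction overlap pairs, rather than Pre-target overlap pairs, gives the pre-activation test described above.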

Deficient cortical tracking of speech in children with developmental language disorder

Anni Nora1,2, Oona Rinkinen1,2, Hanna Renvall1,2,3, Elisabet Service4,5, Marja Laasonen5,6, Eva Arkkila5, Sini Smolander5,7, Riitta Salmelin1,2; 1Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland, 2Aalto NeuroImaging (ANI), Aalto University, Espoo, Finland, 3BioMag Laboratory, HUS Diagnostic Center, Helsinki University Hospital, University of Helsinki and Aalto University, Helsinki, Finland, 4Centre for Advanced Research in Experimental and Applied Linguistics (ARiEAL), Department of Linguistics and Languages, McMaster University, Hamilton, Canada, 5Department of Otorhinolaryngology and Phoniatrics, Head and Neck Surgery, Helsinki University Hospital and University of Helsinki, Helsinki, Finland, 6Department of Logopedics, University of Eastern Finland, Joensuu, Finland, 7Research Unit of Logopaedics, University of Oulu, Oulu, Finland

In developmental language disorder (DLD), learning to understand and use spoken language is disrupted. The reason for this remains unknown, and brain imaging studies in children with DLD are sparse. One hypothesized underlying cause is a deficit in acoustic-phonetic processing. Using millisecond-scale magnetoencephalography (MEG) recordings combined with machine-learning models, we set out to investigate whether the cause of this disruption lies in poor cortical tracking of speech. The stimuli were 44 high-frequency spoken Finnish words from different semantic categories (e.g., ‘dog’, ‘car’, ‘hammer’) and 44 sounds with corresponding meanings (e.g., dog bark, car engine, hammering), as well as 8 novel (pseudo)words obeying Finnish phonotactics. Cortical responses to 20 repetitions of each stimulus were measured with MEG in 17 children with DLD and 17 typically developing (TD) children, aged 10–15 years. The sound acoustics were modeled with time-varying (amplitude envelope and spectrogram) and non-time-varying (frequency spectrum, modulation power spectrum) descriptions. A kernel convolution model, which treats the evoked brain responses as time-locked to the sound, was used to decode the time-varying sound features from the corresponding cortical responses. In both children with DLD and control children, cortical activation to spoken words was best modeled as time-locked to the unfolding speech input, whereas cortical processing of environmental sounds showed no such reliance on time-locked encoding. Both the amplitude envelope (amplitude modulations reflecting, e.g., the syllable rhythm) and the spectrogram (detailed spectral content) of spoken words were successfully decoded from time-locked brain responses in bilateral temporal areas, best at ~100 ms latency between sound and cortical activation.
Based on the cortical responses in temporal areas, the models could distinguish which of two test sounds had been presented, with an accuracy of 80–84% (using the amplitude envelope features) and 72–75% (using the spectrogram features). Group differences emerged at longer latencies: the cortical representation of the amplitude envelope information was poorer in children with DLD than in TD children at ~200–300 ms lag. This latency range seems to correspond to the intervals between successive peaks in the amplitude envelope, reflecting the syllable structure of the words. The group difference was especially evident in the right temporal cortex for familiar words and in the left temporal cortex for novel words. Thus, typically developing children seem to display more efficient encoding of the amplitude modulations of speech that reflect the syllable rhythm. We interpret the poorer encoding at longer latencies in the children with DLD as reflecting poorer retention of syllabic-level acoustic-phonetic information in echoic memory and poorer integration of information across syllables. The present results offer a candidate underlying explanation for the impaired word-level comprehension and learning of spoken language in DLD.
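The two-alternative test used here (reconstruct the stimulus from the brain response, then pick whichever of two candidate sounds matches better) can be sketched as follows. This is a simplified backward reconstruction model on simulated sensor data, not the authors' kernel convolution model; the lag, sensor counts, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_times, lag = 20, 500, 10  # lag ~100 ms at 100 Hz sampling

def simulate_response(envelope, mixing, noise=0.5):
    # Sensors respond to the amplitude envelope at a fixed lag (time-locked encoding)
    lagged = np.roll(envelope, lag)
    return np.outer(mixing, lagged) + noise * rng.standard_normal((n_sensors, n_times))

mixing = rng.standard_normal(n_sensors)
train_env = rng.standard_normal(n_times)
X = simulate_response(train_env, mixing)

# Backward model: least-squares mapping from sensor data to the lagged envelope
W, *_ = np.linalg.lstsq(X.T, np.roll(train_env, lag), rcond=None)

def two_afc(env_a, env_b, presented):
    # Reconstruct the envelope from the response to `presented`, then pick
    # whichever candidate correlates better with the reconstruction (2-AFC test)
    recon = simulate_response(presented, mixing).T @ W
    r_a = np.corrcoef(recon, np.roll(env_a, lag))[0, 1]
    r_b = np.corrcoef(recon, np.roll(env_b, lag))[0, 1]
    return "a" if r_a > r_b else "b"

env_a, env_b = rng.standard_normal(n_times), rng.standard_normal(n_times)
print(two_afc(env_a, env_b, env_a))  # prints "a"
```

Repeating this test over many sound pairs and lags yields the accuracy-by-latency profiles on which the reported group differences are based.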

From Temporal to Frontal Cortex and Back: Testing the Dynamics underlying Sentence Comprehension with TMS-EEG

Joëlle A. M. Schroën1,2, Thomas C. Gunter1, Leon O. H. Kroczek3, Gesa Hartwigsen1, Angela D. Friederici1; 1Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany, 2International Max Planck Research School on Neuroscience of Communication: Function, Structure and Plasticity (IMPRS NeuroCom), Leipzig, Germany, 3University of Regensburg, Regensburg, Germany

Introduction. During everyday conversation, the listener’s goal is to extract the meaning of the communicated message. This complex process is supported by interactions within a left-dominant fronto-temporo-parietal network of highly interconnected brain regions, including the left posterior inferior frontal gyrus (pIFG), the left posterior superior temporal gyrus and sulcus (pSTG/STS), and the left angular gyrus (AG). Causal evidence for the precise timing of the interplay between nodes of this language network (where, what, when) is currently lacking. As a first step toward addressing this outstanding issue, we present a set of three carefully designed experiments that combined transcranial magnetic stimulation (TMS) with simultaneous electroencephalography (EEG) recordings. Methods. To account for the robustness of the language network [1], we adopted a condition-and-perturb design. Offline conditioning by means of 40 seconds of continuous theta-burst stimulation [2] was always applied over the left AG, which is strongly connected with the left pSTG/STS and left pIFG [3]. Across the three experiments, we varied the timing of triple-pulse (10 Hz) online repetitive TMS (rTMS) to test when neural activity in the left pIFG and left pSTG/STS is causally relevant for auditory sentence comprehension. While participants listened to four-word sentences (e.g., He drinks the beer), online rTMS was applied at different latencies relative to verb onset: early (0–200 ms), middle (150–350 ms) or late (300–500 ms). Because TMS over lateral brain regions produces very large artifacts, we used the robust N400 effect [4] at the noun position as a read-out for drawing inferences. Results.
Our experiments provide evidence for region-specific, time-critical processing windows within the language system: functional relevance was demonstrated first for the left pSTG/STS (0–200 ms), then for the left pIFG (150–350 ms), and finally again for the left pSTG/STS (300–500 ms). The perturbation outlasted the stimulation itself and impacted the processing of the subsequent noun, underscoring the crucial importance of these regions and time windows for understanding the meaning of a sentence. Conclusion. Our study provides causal, time-specific evidence for a coordinated temporal interplay within the language network during auditory sentence processing, thereby extending the insights gained from previous electrophysiological and neuroimaging work on the neurobiology of language. References. [1] Hartwigsen, G. (2018). Flexible redistribution in cognitive networks. Trends in Cognitive Sciences, 22(8), 687-698. [2] Huang, Y. Z., Edwards, M. J., Rounis, E., Bhatia, K. P., & Rothwell, J. C. (2005). Theta burst stimulation of the human motor cortex. Neuron, 45(2), 201-206. [3] Niu, M., & Palomero-Gallagher, N. (2023). Architecture and connectivity of the human angular gyrus and of its homolog region in the macaque brain. Brain Structure and Function, 228(1), 47-61. [4] Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: Finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology, 62, 621-647.

MEG evidence that modality-independent conceptual representations encode lexical but not low-level sensory features

Julien Dirani1, Liina Pylkkänen1,2; 1New York University, 2New York University Abu Dhabi

Words convey meaning, but in the neurobiology of language we lack a satisfying account of the interplay between conceptual and lexical representations. Previous research has shown that the brain encodes concepts through both abstract and sensory-motor representations, such as visual shapes, sounds, and motor representations (Ralph et al., 2017). Furthermore, it has been established that different tasks and stimulus modalities (such as pictures vs. words) activate shared representations of concepts, referred to as modality-independent representations (Devereux, Clarke, Marouchos, & Tyler, 2013; Fairhall & Caramazza, 2013; Simanova, Hagoort, Oostenveld, & Van Gerven, 2014). However, the nature of modality-independent representations remains ill understood. They may be equivalent to abstract representations; alternatively, they may possess sensory components that are shared across different contexts and stimuli. Here we use magnetoencephalography (MEG) and a novel representation-learning approach to investigate the content of modality-independent concepts and how they evolve over time at the millisecond level. Seventeen native English speakers participated in an MEG experiment involving an animacy judgment task with randomly presented pictures and words of animals and tools. In the first part of the analysis, we identified the time points in the MEG data at which modality-independent representations of concepts were activated. We used a decoding approach that trained a 3-layer neural network classifier of exemplar-level concepts (e.g., dog, fork) on MEG data from one modality (words) and tested it on MEG data from the other modality (pictures) at each time point. This enabled us to extend previous research conducted at the level of semantic categories to the exemplar level (Dirani & Pylkkänen, under review).
To investigate the content of modality-independent representations, we extracted the hidden representations learned by the models that demonstrated robust generalization across modalities. The rationale is that models that generalized across modalities learned a representation of the MEG data that supported classification and was independent of stimulus modality. We conducted a representational similarity analysis (RSA) to unpack the content of these modality-independent representations and to examine their relation to existing models of vision (He et al., 2016), lexical processing (Balota et al., 2007), and conceptual knowledge (McRae et al., 2005). Our results show that pictures activated exemplar-level representations at around 75 ms, whereas for words these representations were activated at ~150 ms. Cross-condition decoding was significantly above chance starting around 200 ms after stimulus onset; however, the timing of this decoding varied considerably across participants. The RSA revealed significant correlations between modality-independent representations and human-normed semantic features throughout most of the analysis time window (200–600 ms). Crucially, while no significant correlation was observed between modality-independent representations and low-level visual representations, modality-independent representations correlated with representations capturing the lexical statistics of the exemplar names, initially at 200 ms and later at 400–500 ms. Overall, our results suggest that modality-independent conceptual representations do not encode low-level sensory representations but do capture the lexical statistics associated with their names. These findings contribute to our understanding of how meaning is represented and processed across modalities, highlighting the connection between language processes and conceptual knowledge.
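The cross-modality decoding logic (train on words, test on pictures; above-chance accuracy implies a shared, modality-independent representation) can be sketched on simulated data. The sketch uses a nearest-centroid classifier rather than the authors' 3-layer network, and all sizes, offsets, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_exemplars, n_trials, n_features = 4, 30, 32  # e.g., dog/fork/... x trials x sensors

shared = rng.standard_normal((n_exemplars, n_features))  # modality-independent code
word_offset = 0.5 * rng.standard_normal(n_features)      # modality-specific baselines
pic_offset = 0.5 * rng.standard_normal(n_features)

def simulate(offset, noise=0.5):
    # Each trial: the exemplar's shared pattern + modality baseline + sensor noise
    labels = np.repeat(np.arange(n_exemplars), n_trials)
    X = shared[labels] + offset + noise * rng.standard_normal((labels.size, n_features))
    return X, labels

def cross_decode(X_train, y_train, X_test, y_test):
    # Fit exemplar centroids in one modality, classify trials from the other
    centroids = np.stack([X_train[y_train == k].mean(axis=0)
                          for k in range(n_exemplars)])
    pred = np.argmin(((X_test[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    return (pred == y_test).mean()

Xw, yw = simulate(word_offset)  # train: word trials
Xp, yp = simulate(pic_offset)   # test: picture trials
acc = cross_decode(Xw, yw, Xp, yp)
print(acc > 1 / n_exemplars)  # True: above-chance cross-modal generalization
```

In the study this test is run separately at each timepoint of the MEG epoch, which is what yields the onset latencies reported above.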
