Slide Sessions

Session | Time & Location | Chair
Session A | Tuesday, August 20, 1:30 – 3:00 pm, Finlandia Hall | Seana Coulson
Session B | Wednesday, August 21, 2:00 – 3:30 pm, Finlandia Hall | Jamie Reilly
Session C | Thursday, August 22, 10:45 – 12:15 pm, Finlandia Hall | Clara Martin

Slide Session A

Tuesday, August 20, 1:30 – 3:00 pm, Finlandia Hall

Speakers: Rachel Romeo, Anna Martinez-Alvarez, Claudia Männel, Linda Lönnqvist

A1 - Cortical plasticity associated with a parent-implemented language intervention

Rachel Romeo1,2, Julia Leonard1,3, Hannah Grotzinger1, Sydney Robinson1,3, Megumi Takada1, Joshua Segaran1, Allyson Mackey1,3, Meredith Rowe4, John Gabrieli1,4; 1Massachusetts Institute of Technology, 2Boston Children's Hospital, 3University of Pennsylvania, 4Harvard University

Introduction: Children’s early language experiences, including high-quality parent-child interactions, are related to their linguistic, cognitive, and academic development, as well as both their brain structure and function (Romeo et al., 2018). On average, children from lower socioeconomic status (SES) backgrounds receive reduced language exposure. Recently, several parent-implemented interventions have resulted in both improved home language environments and increases in children’s language skills (e.g., Leech et al., 2018, Ferjan Ramirez et al., 2018). However, the neuroplastic mechanisms underlying this modification in children’s language input-output relationship remain unknown. Methods: One hundred lower-SES 4-to-6 year-old children and their primary caregivers were randomly assigned to either a 9-week family-based intervention or a no-contact control group. The intervention centered on an interactive, culturally sensitive curriculum, during which trained facilitators led didactic small-group sessions on using responsive “meaningFULL language” to enhance children’s communication, executive functioning, and school readiness, provided in either English or Spanish. Children completed pre- and post-assessments of verbal and nonverbal cognitive skills, and subsets of each participant group additionally completed two full days of auditory home language recording (with LENA) and structural neuroimaging, from which longitudinal cortical thickness changes were calculated using Freesurfer. Results: Controlling for baseline measures, families who completed the intervention exhibited significantly more adult-child conversational turns than families assigned to the control group; however, there was still wide variation in response. A 3-way interaction revealed that within the intervention group only, the magnitude of change in conversational turn-taking was positively correlated with increases in children’s receptive and expressive language scores.
Furthermore, change in turn-taking was significantly positively correlated with cortical thickening in language-related left inferior frontal regions, as well as social-related right supramarginal regions. Conclusions: This study provides the first evidence of neural plasticity as a result of perturbations in children’s early language environments. Results suggest that the neural mechanisms underlying the effect of parent-implemented language interventions on children’s language skills may lie in cortical plasticity of both canonical language and social regions during development. These findings have translational implications for social, educational, and clinical policies involving early intervention.

A2 - Neural networks of non-adjacent rule learning in infancy

Anna Martinez-Alvarez1, Judit Gervain1, Elena Koulaguina2,3, Ferran Pons2,3,4, Ruth De Diego-Balaguer2,3,4,5; 1CNRS-Université Paris Descartes, 2University of Barcelona, 3Cognition and Brain Plasticity Unit, 4Institute for Brain, Cognition and Behaviour, 5ICREA

One essential mechanism advocated to underlie infant grammar acquisition is rule learning. Previous research investigated the neural networks of repetition-based rule learning (ABB; e.g., “mubaba,” “penana”) in neonates and found increased responses to repetition sequences in temporal and left frontal regions (Gervain et al., 2008). However, the learning of rules involving non-adjacent elements in the absence of repetition-based cues (AXB; e.g., “pel wadim rud” “pel loga rud”) is only observed after the first year of life (Gómez & Maye, 2005). Recent proposals account for this developmental trajectory by postulating that infants’ attentional system may support language development (de Diego-Balaguer et al., 2016). The present study reports four experiments, in which we test the hypothesis that prosodic cues promote the learning of non-repetition-based regularities in infancy. Prosodic cues (e.g. pitch manipulation) are used as a proxy for exogenous attention capture, already present in early infancy. We predict that the use of exogenous attention mechanisms will allow young infants to learn the rules. In two behavioural and two fNIRS experiments, we presented 8-10-month-old infants (n = 83) with sequences containing an AXB-type structure, where A and B predict one another with certainty (“pedibu”, “pegabu”) or a random control structure (“dibupe”, “bugape”). The stimuli either contained or lacked pitch cues in the dependent (A and B) elements. Infants’ rule discrimination was measured behaviourally using a Central Fixation Procedure and infants’ brain activity (hemodynamic response) was measured in the temporal, parietal, and frontal lobes using functional near-infrared spectroscopy (fNIRS). In the absence of pitch cues, behavioural results show that infants are unable to discriminate rule-following from random control structures. At a neural level, a larger activation (oxyHb) is observed in temporal areas but no difference between conditions (rule vs.
no rule) arises, suggesting that infants’ brains process the auditory stimuli similarly in both conditions. However, in the presence of prosodic cues highlighting the elements to be learned, infants show successful rule learning behaviourally, and a significantly larger activation (oxyHb) for the rule condition is observed in bilateral temporal and frontal areas. These results suggest that infants’ use of prosodic cues present in the input facilitates their learning of rules. This study contributes to our understanding of the brain substrates of rule learning, suggesting that the powerful attention system infants are equipped with early in life may assist language learning.

A3 - Basic acoustic features of the learning context shape infants’ lexical acquisition

Claudia Männel1,2,3, Hellmuth Obrig1,2, Arno Villringer1,2, Merav Ahissar4, Gesa Schaadt1,2,3; 1Medical Faculty, University of Leipzig, 2Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, 3Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 4Department of Psychology, The Hebrew University of Jerusalem

In language acquisition, infants benefit from salient acoustic marking and repetition of the words to be learned. However, beyond the features of the learning items themselves, basic acoustic and phonological characteristics of the learning context have not been examined in detail for lexical acquisition. Those contextual features may be highly relevant, given that speakers tend to provide constant sentence frames and frequent words in the learning context when teaching infants new words. The current event-related brain potential (ERP) study examined, in two experiments, the processing benefit of repeated contextual information, which might act as an anchor for the encoding and later recognition of learning items. In the first experiment, we probed repeated acoustic information (i.e., constant pitch marking of syllables) as learning context, and in the second experiment, repeated phonological information (i.e., constant syllables). In Experiment 1, infants at 6.5 months (N = 30) were familiarized with syllable pairs in two blocks with a constant acoustic context (i.e., first syllable with constant pitch marking) and two blocks with a variable context (i.e., first syllable with variable pitch marking), each comprising 40 stimulus pairs. Importantly, the second syllables represented the learning items and were identical across conditions, thus enabling the evaluation of ERP responses to the same stimuli preceded by either constant or random first stimuli. In Experiment 2, 10-month-olds (N = 28) were familiarized with syllable pairs in two blocks with a constant phonological context (i.e., constant first syllable) and in two blocks with a variable context (i.e., variable first syllable). In both experiments, each familiarization block was followed by a test phase contrasting ERP responses to familiarized versus novel syllables (Experiment 1) and pseudo-words containing the familiarized versus novel syllables (Experiment 2).
Here, differential ERP responses would indicate the recognition of previously presented stimuli and reveal whether recognition is modulated by familiarization condition (i.e., constant vs. variable context). During familiarization, ERP results across experiments revealed more pronounced responses to the second syllables presented in the constant context than in the variable context. This implies that physically identical stimuli are processed differently depending on their stimulus environment. ERP results at test revealed a modulation of familiarity recognition: infants only showed ERP differences between novel and familiar stimuli when the latter had previously been heard under constant context conditions. Together, these results indicate that repeated contextual information, whether acoustic or phonological in nature, acts as an anchor guiding infants’ attention towards the processing of subsequent stimuli. Importantly, this enhanced processing seems to boost infants’ later recognition of learning items, pointing to the relevance of constant contextual information in language acquisition.

A4 - Brain Responses to Speech Sound Changes are Associated with the Development of Prelinguistic Skills in Infancy

Linda Lönnqvist1, Paula Virtala1, Eino Partanen1, Paavo H. T. Leppänen2, Anja Thiede1, Teija Kujala1; 1Cognitive Brain Research Unit, Faculty of Medicine, University of Helsinki, 2Department of Psychology, University of Jyväskylä, Finland

Neural auditory processing and prelinguistic communication, such as the use of vocalizations, facial expressions and gestures for communicative purposes, build the foundation for later language development. Children who later develop language impairments may exhibit difficulties in either or both of these two abilities at an early age. However, the associations between neural auditory processing abilities and the development of prelinguistic communication skills are not well known. The interplay of these two abilities needs to be further elucidated in order to detect infants at highest risk of language development delays and advance preventive interventions. Optimally, this should be done using longitudinal data sets and methods suitable for longitudinal analyses. This study investigated the relationship between neural speech sound processing at six months of age and the development of prelinguistic communication skills between six and 12 months of age in approximately 90 infants. Neural speech sound processing, specifically, cortical discrimination of speech-relevant auditory features, was studied using electroencephalography (EEG). We recorded mismatch responses (MMRs) to changes of the frequency, vowel duration, or vowel identity of the second syllable in the pseudoword /ta-ta/. Prelinguistic communication skills at six and 12 months of age were assessed with the parental questionnaire Infant-Toddler Checklist (ITC). To examine the association between the level of MMR amplitudes and the change in prelinguistic skills, we used a variant of a structural equation model (SEM), the latent change score (LCS) model. We opted for a method explicitly modelling intra-individual change of prelinguistic skills in order to correctly capture the longitudinal nature of the data set.
The preliminary results of the LCS model suggested that a large amplitude of the MMR for the frequency deviant was associated with a large positive change of prelinguistic skills between six and 12 months of age. To check the robustness of the results, we also built a simple correlational model, showing that the amplitude of the MMR for the frequency deviant was positively associated with the level of prelinguistic skills at 12 months of age. Overall, our results suggest that neural auditory processing of speech sounds is associated with the development and level of prelinguistic communication skills. Neural auditory processing could therefore be a promising neural marker for prelinguistic development.
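The latent change score logic described above can be illustrated in simplified form as a change-score regression on simulated data (a minimal Python sketch of the core idea, not the SEM actually fitted in the study; all variables below are synthetic stand-ins):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 90  # approximate sample size reported in the abstract

# Simulated stand-ins for the study's measures (all synthetic):
mmr = rng.normal(0.0, 1.0, n)        # MMR amplitude to the frequency deviant at 6 months
skills_6m = rng.normal(0.0, 1.0, n)  # prelinguistic skills (ITC) at 6 months
noise = rng.normal(0.0, 0.5, n)
# Construct the 12-month scores so that larger MMR amplitudes predict larger gains
skills_12m = skills_6m + 0.5 * mmr - 0.3 * skills_6m + noise

# Change-score regression: delta ~ baseline + MMR
# (the simplified, observed-variable core of a latent change score model)
delta = skills_12m - skills_6m
X = np.column_stack([np.ones(n), skills_6m, mmr])
beta, *_ = np.linalg.lstsq(X, delta, rcond=None)
# beta[2] estimates the MMR-change association; positive by construction here
```

In the full LCS model, change is represented as a latent variable rather than an observed difference score, which separates true intra-individual change from measurement error.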



Slide Session B

Wednesday, August 21, 2:00 – 3:30 pm, Finlandia Hall

Speakers: Nikki Janssen, Olga Dragoy, Nitin Tandon, Roeland Hancock

B1 - Subtracts of the arcuate fasciculus mediate conceptually driven generation and repetition of speech

Nikki Janssen1,2, Roy P.C. Kessels1,2,5, Rogier B. Mars1,4, Alberto Llera1,4, Christian F. Beckmann1,3,4, Ardi Roelofs1; 1Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 2Department of Medical Psychology, Radboud University Medical Center, 3Radboudumc, Donders Institute for Brain, Cognition and Behaviour, Department of Cognitive Neuroscience, 4Oxford Centre for Functional Magnetic Resonance Imaging of the Brain (FMRIB), University of Oxford, 5Vincent van Gogh Institute for Psychiatry, Venray, the Netherlands

Recent tractography and postmortem microdissection studies have shown that the left arcuate fasciculus (AF), a major fiber tract for language, consists of two subtracts directly connecting temporal and frontal cortex. These subtracts link posterior superior temporal gyrus (STG) versus middle temporal gyrus (MTG) to posterior inferior frontal gyrus. It has been hypothesized that the subtracts mediate different functions in speech production, but direct evidence for this hypothesis is lacking. To functionally segregate the two segments of the AF with different hypothesized functions, we combined functional magnetic resonance imaging (fMRI) with diffusion tensor imaging (DTI) tractography. We determined the functional roles of the STG and MTG subtracts using two prototypical speech production tasks, namely spoken pseudoword repetition (PR) and verb generation (VG). Overt repetition of aurally presented pseudowords was assumed to activate areas involved in sublexical mapping of sound to articulation and the STG segment of the AF. In contrast, overt generation of verbs in response to aurally presented nouns was expected to activate areas associated with lexical-semantically driven production and the MTG segment of the AF. Task-based activation nodes then served as seed regions for probabilistic tractography. Fifty healthy adults (25 women, range 19–75 years, all right-handed) underwent task-fMRI and multishell diffusion-weighted imaging. Within the major frontal and temporal activation clusters, the peak voxels were identified for each task, resliced to the native space of each subject’s DTI data, and enlarged to a sphere with a radius of 6 mm. Tractography was then performed using a probabilistic tractography algorithm implemented in FSL (probtrackx), and CSF segmentations in temporal and frontal regions, acquired through FSL FAST, were used as exclusion masks.
We then zoomed in on the region where the fMRI-based tracts arc around the lateral sulcus and calculated the location of the peak sample count of the group probability maps of each tract. For validation purposes, we subsequently performed Linear Discriminant Analyses (LDA) using as features the (x,y,z) coordinates of the peak voxel count within the arc of the AF for each task and each subject, and assessed the performance using Leave One Out (LOO) cross-validation. In the temporal lobe, PR and VG were associated with areas of activation in the left STG and left MTG, respectively, and both tasks showed activation in BA44. Fiber tracking based on these temporal and frontal fMRI-based seeds revealed a clear segmentation of the left AF into two subtracts. In addition, the LDA resulted in a mean classification accuracy of 82.9% (STD = 0.29), demonstrating that the location of the peak sample count within the AF contains discriminative power to distinguish which of the two tasks was being performed. Our findings corroborate evidence for the existence of two distinct subtracts of the AF with different functional roles, namely sublexical mapping of sound to articulation by the STG-tract and conceptually driven generation of words by the MTG-tract. Our results contribute to the unraveling of a century-old controversy concerning the functional role in speech production of a major fiber tract involved in language.
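The validation step described above can be sketched as a two-class Fisher discriminant with leave-one-out cross-validation (a minimal Python sketch; the (x, y, z) coordinates below are illustrative synthetic stand-ins, not the study's data):

```python
import numpy as np

def fisher_lda_fit(X, y):
    # Two-class Fisher LDA: w = Sw^-1 (mu1 - mu0), with the decision
    # threshold placed midway between the projected class means.
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (len(X0) - 1) * np.cov(X0.T) + (len(X1) - 1) * np.cov(X1.T)
    w = np.linalg.solve(Sw, mu1 - mu0)
    threshold = w @ (mu0 + mu1) / 2.0
    return w, threshold

def loo_accuracy(X, y):
    # Leave-one-out cross-validation: refit on all-but-one, classify the held-out point.
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        w, t = fisher_lda_fit(X[mask], y[mask])
        hits += int(w @ X[i] > t) == y[i]
    return hits / len(y)

# Illustrative stand-ins for per-subject (x, y, z) peak-count coordinates
rng = np.random.default_rng(0)
coords = np.vstack([
    rng.normal([40.0, -30.0, 20.0], 3.0, (50, 3)),  # task 1 (e.g. PR), hypothetical centre
    rng.normal([45.0, -40.0, 10.0], 3.0, (50, 3)),  # task 2 (e.g. VG), hypothetical centre
])
labels = np.repeat([0, 1], 50)
accuracy = loo_accuracy(coords, labels)
```

Because each held-out subject never enters the fit, the LOO accuracy is an unbiased estimate of how well the peak location alone discriminates the two tasks.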

B2 - Functional specificity of the left frontal aslant tract: evidence from intraoperative language mapping

Olga Dragoy1,2, Andrey Zyryanov1, Oleg Bronov3, Elizaveta Gordeyeva1, Natalya Gronskaya4, Oksana Kryuchkova5, Evgenij Klyuev6, Dmitry Kopachev7, Igor Medyanik6, Lidiya Mishnyakova8, Nikita Pedyash3, Igor Pronin7, Andrey Reutov5, Andrey Sitnikov8, Ekaterina Stupina1, Konstantin Yashin6, Valeriya Zhirnova1, Andrey Zuev3; 1National Research University Higher School of Economics, Moscow, 2Federal Center for Cerebrovascular Pathology and Stroke, Moscow, 3National Medical and Surgical Center named after N.I. Pirogov, Moscow, 4National Research University Higher School of Economics, Nizhny Novgorod, 5Central Clinical Hospital of the Presidential Administration of the Russian Federation, Moscow, 6Privolzhsky Research Medical University, Nizhny Novgorod, 7N.N. Burdenko National Scientific and Practical Center for Neurosurgery, Moscow, 8Federal Centre of Treatment and Rehabilitation of the Ministry of Healthcare of the Russian Federation, Moscow

The left frontal aslant tract (FAT), a frontal intralobular white-matter pathway connecting the posterior regions of the superior and inferior frontal gyri, has been proposed to be relevant for language, and specifically for speech initiation and fluency. Individuals with stroke (Kinkingnehun et al., 2007; Basilakos et al., 2014), tumor (Bizzi et al., 2012; Chernoff et al., 2018) and primary progressive aphasia (Catani et al., 2013; Mandelli et al., 2014) showed reduced spontaneous speech production whenever the left FAT was involved. A few recent studies combined intraoperative direct electrical stimulation (DES) with white-matter reconstructions to tap into the linguistic relevance of the left FAT (Fujii et al., 2015; Kinoshita et al., 2015; Sierpowska et al., 2015; Vassal et al., 2014). However, convincing evidence that DES of the FAT affects specifically spontaneous speech initiation, and not general language production ability, was missing. The aim of this study was to test the linguistic functional specificity of the left FAT in awake surgery settings. Ten consecutive patients (three female; age range 25-64, M=41 y.o.) underwent awake craniotomy with language mapping for removal of pathological brain tissue (9 primary brain tumors, WHO grade 1-4, and 1 focal cortical dysplasia) in proximity to the left FAT. Two language tasks were used in combination with cortical DES: picture naming – a standard and widely used test for intraoperative language production mapping – and sentence completion, tapping more specifically into spontaneous speech initiation. Diffusion-tensor imaging sequences were acquired for all patients preoperatively, using 3T or 1.5T scanners (64 directions, 2.5 or 3 mm isovoxel, b=1500 or 1000 s/mm2, two repetitions with opposite phase encoding directions).
After preprocessing in FSL (Jenkinson et al., 2012) and ExploreDTI (http://www.exploredti.com) using the deterministic diffusion tensor imaging approach, the left FAT of each patient was manually reconstructed in TrackVis (http://www.trackvis.org). The language-positive sites revealed during the intraoperative procedure were then mapped onto those individual reconstructions. Intraoperative stimulation of the exposed cortex in all ten cases resulted in language-positive sites, with a task dissociation revealed. Some sites were predominantly responsive to sentence completion: when these were stimulated, patients could not complete a sentence but were able to name a picture. Overlaying the language-positive sites that were specifically responsive to sentence completion, and not to naming, on the tractography reconstructions demonstrated that all of them were located precisely on individual cortical terminals of the FAT in the superior and/or inferior frontal gyri. Direct electrical stimulation of the left FAT was thus associated with a specific language impairment – an inability to complete sentences, in contrast to a spared ability to name a picture. This demonstrates the linguistic functional specificity of the left FAT as a tract underlying spontaneous speech initiation and suggests the sentence completion task as an adequate tool for intraoperative functional mapping of the FAT. The study was supported by the Russian Foundation for Basic Research (project 18-012-00829) and by the RF Government grant (ag. No. 14.641.31.0004).

B3 - A new essential language site revealed by direct cortical recordings and stimulation

Nitin Tandon1,2, Kiefer J Forseth1; 1McGovern Medical School, 2Memorial Hermann Hospital

Functional maps of eloquent cortex have assigned major roles to the inferior frontal gyrus, distributed substrates in the temporal lobe, and mouth sensorimotor cortex. These architectures for language production have been primarily informed by data from behavioral responses, lesion mapping, and functional imaging – methods without access to rapid, transient, and coordinated neural processes. In contrast, human intracranial electrophysiology is uniquely suited to study the network dynamics involved in cognition, with full-spectrum recordings of cortical oscillations at millimeter spatial and millisecond temporal resolution. Furthermore, these recordings afford us the opportunity to causally interact with cortex through the injection of targeted current, mimicking transient focal lesions. In a large cohort, we used both passive recordings and active modulations of cortical function to generate a complete characterization of the language network. These results delineate and emphasize an under-appreciated, yet essential, node in the broader network: the dorsolateral prefrontal cortex. We collected data in 201 patients undergoing language mapping (awake craniotomy, n=60; subdural grids, n=49; stereotactic depths, n=92) with CSM (1-10mA, 50Hz, 2s) and/or intracranial electrophysiology. Language function was evaluated with a battery of tasks including visual picture naming and auditory naming to description. Stimulation-induced depolarization and electrode recording zones were transformed onto the pial surface with a current spread model to generate subject-specific functional maps. CSM at the group level revealed five regions that consistently disrupted both auditory and visual naming. These regions were also identified using electrophysiology and are listed in their temporal sequence of engagement: middle fusiform gyrus, inferior frontal gyrus, dorsomedial prefrontal cortex, superior temporal gyrus, and posterior middle temporal gyrus.
Gamma (60-120 Hz) power during task performance was strongly predictive of functional classification by CSM (p<0.001). In particular, we found that the dorsolateral prefrontal region was active prior to articulatory onset and that its stimulation was consistently disruptive to domain-general naming. This analysis, integrating essential surgical planning tools, constitutes a significant advance in large-scale, multimodal, population-level maps of human language. The results motivate further investigation of the role of the dorsolateral prefrontal cortex in language production. An analysis of the impact on language production of resections in close proximity to this site is underway.

B4 - Genetic Differentiation of Dorsal and Ventral Language Processes

Roeland Hancock1; 1University of Connecticut

A major goal of neurobiological studies of language processing is to delineate the functional architecture of the multiple, hierarchically interacting neural systems that underlie human language capacity, and to identify the biological factors that regulate the development of these systems. We used genetic correlation to investigate how shared genetic factors may contribute to covariance in language-related task fMRI activation between the left inferior frontal cortex (IFC), spanning classical Broca's area, and the rest of the left hemisphere. The results broadly provide novel support for a dorsal/ventral dual-stream model of language processing, with dorsal and ventral streams having distinct genetic influences, yet also raise questions about the role of premotor cortex (PMC) and the anterior temporal lobe (aTL) within language networks. The IFC was partitioned into four clusters by spectral clustering and the Calinski-Harabasz score. A cluster spanning BA44/45 largely reproduced current models of dorsal-stream language architecture, with significant genetic similarities between IFG, posterior STS, perisylvian cortex and angular gyrus. A cluster spanning inferior BA45 and posterior BA47 was suggestive of a ventral language stream, having significant genetic similarities with middle temporal gyrus/TE2. These results provide novel confirmation of the current understanding of language networks, showing that a broad dorsal/ventral stream model is also supported by genetic differentiation of the two streams. In contrast to the expected dorsal architecture, we found that activation in PMC was not genetically similar to adjacent BA44. Instead, PMC activity, along with a posterior temporal region, was genetically similar to a cluster spanning BA47/BA45. This result is also inconsistent with parcellations of the IFG based on genetic similarity in cortical morphology (Cui et al., 2016), which placed PMC with BA44.
This suggests that functional architecture is often, but not always, consistent with underlying genetic architecture, and points to the importance of understanding the neural architecture of language processing at multiple biologically sensitive levels. This analysis also revealed a putative parcellation of sensorimotor-integration and lexico-semantic networks with distinct shared genetic variance. Methods: Functional activation (beta) values from a narrative comprehension fMRI task (Binder, 2011) were obtained from preprocessed Human Connectome Project (HCP) young adult data (Barch et al., 2013). The related individuals from the larger HCP1200 sample were split into test and validation samples (approximately 240 twin pairs in each) to verify the reliability of clusters. Within each sample, beta values were adjusted for sex and age, and z-transformed. Genetic correlations were estimated between each vertex within left BA44, BA45, BA47 and FOP, and every other vertex in the left hemisphere using a bivariate additive-environmental model that partitioned variance into additive genetic and environmental components. Permutation tests were used to identify regions of significant genetic similarity at P < .05.
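The bivariate additive-environmental decomposition itself is beyond a short sketch, but the final permutation-testing step can be illustrated generically (Python, simulated vertex-wise values; all variable names and numbers below are illustrative, not the study's data):

```python
import numpy as np

def permutation_pvalue(a, b, n_perm=2000, seed=0):
    # Two-sided permutation p-value for the correlation between two measures:
    # shuffling b destroys any true pairing while preserving its distribution.
    rng = np.random.default_rng(seed)
    observed = abs(np.corrcoef(a, b)[0, 1])
    exceed = sum(
        abs(np.corrcoef(a, rng.permutation(b))[0, 1]) >= observed
        for _ in range(n_perm)
    )
    return (exceed + 1) / (n_perm + 1)  # add-one correction avoids p = 0

rng = np.random.default_rng(1)
x = rng.normal(size=240)                          # e.g. adjusted activation values
y_related = 0.6 * x + 0.8 * rng.normal(size=240)  # built to correlate with x
y_unrelated = rng.normal(size=240)

p_related = permutation_pvalue(x, y_related)      # small: association present
p_unrelated = permutation_pvalue(x, y_unrelated)  # typically not significant
```

The same logic applies per vertex; in practice one would additionally correct for the many vertex-wise tests across the hemisphere.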



Slide Session C

Thursday, August 22, 10:45 – 12:15 pm, Finlandia Hall

Speakers: Oscar Woolnough, Angela Grant, Pantelis Lioumis, Maria Spychalska

C1 - Functional Architecture of the Ventral Visual Pathway for Reading

Oscar Woolnough1, Cristian Donos1, Patrick Rollo1, Simon Fischer-Baum2, Stanislas Dehaene3,4, Nitin Tandon1,5; 1University of Texas Health Science Center at Houston, 2Rice University, 3INSERM-CEA Cognitive Neuroimaging Unit, 4College de France, 5Memorial Hermann Hospital, Texas Medical Center

Visual word reading is believed to be performed by a hierarchical system with increasing sensitivity to complexity, from letters to morphemes and whole words, progressing anteriorly along the ventral cortical surface and culminating in the visual word form area (VWFA). The VWFA has been implicated in sub-lexical processing, but its precise role remains controversial. The lack of temporal resolution in functional imaging studies and problems with source localisation in non-invasive electrophysiological measures have led to an incomplete understanding of the functional roles of visual word regions. Here, we used direct recordings across the ventral visual pathway in a large cohort to create a spatiotemporal map of visual word reading. Word reading experiments were performed in 48 patients undergoing semi-chronic implantation of intracranial electrodes for localising pharmaco-resistant epilepsy. Each patient performed a set of experiments testing sub-lexical processing (false-fonts, letter strings of varying sub-lexical complexity and words), lexical processing (single word reading of words and pseudowords), and higher order language (jabberwocky and real sentences). Broadband gamma activity (70-150Hz) from electrodes localised to the ventral cortical surface (n>600) was used to index local neural processing. We found, (i) contrary to fMRI studies, no evidence of a posterior-to-anterior complexity gradient but instead a sharp transition between preferential activation to false-fonts and a word-selective region in the mid-fusiform. Non-negative matrix factorisation showed two distinct response profiles, prioritising either novel, low-probability stimuli or word-like stimuli in different spatial clusters. (ii) Contrasts of real and jabberwocky words during tasks requiring word engagement revealed two lexical processing regions: mid-fusiform and lateral occipitotemporal gyrus.
However, no distinctions of this kind were seen in these regions while passively viewing the words in a pattern detection task, suggesting task-related modulation of these regions by higher language areas. (iii) During sentence reading, activity in the mid-fusiform was driven primarily by word frequency and to a lesser extent by word length. These effects were seen equally in both sentences and unstructured word lists, dissociating frequency from predictability. The frequency effect was also evident, to a lesser extent, in the occipitotemporal gyrus, emerging in both regions ~160 ms after word onset. Bigram frequency, orthographic neighbourhood and number of morphemes, syllables or phonemes did not significantly affect activity in any ventral region. In conclusion, we have identified and characterised at least two spatially separable ventral word regions that perform distinct roles in reading: lateral occipitotemporal gyrus and mid-fusiform cortex. We have shown that these regions are task-modulated and sensitive to the statistics of natural language, reflecting diverse influences from bottom-up vs. top-down processes. This highlights the critical need for evaluating network behaviour rather than purely local activation when characterising language processes.
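Non-negative matrix factorisation of the kind used above to separate response profiles can be sketched with the classic Lee & Seung multiplicative updates (a generic Python implementation run on synthetic low-rank data, not the study's broadband gamma recordings):

```python
import numpy as np

def nmf(V, k, n_iter=300, eps=1e-9, seed=0):
    # Lee & Seung multiplicative updates for V ~ W @ H with k components;
    # both factors are initialised positive and remain non-negative throughout.
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic non-negative rank-2 matrix standing in for electrode-by-time activity
rng = np.random.default_rng(1)
V = rng.random((30, 2)) @ rng.random((2, 100))
W, H = nmf(V, k=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

With k=2, the columns of W assign each electrode a loading on each of the two response profiles, which is how spatial clusters with distinct temporal profiles can be read off.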

C2 - From structure to function: A multimodal analysis of bilingual speech in noise processing

Angela Grant1,4, Shanna Kousaie2,4, Kristina Coulter1,4, Shari Baum2,4, Vincent Gracco2,3,4, Denise Klein2,4, Debra Titone2,4, Natalie Phillips1,4; 1Concordia University, 2McGill University, 3Yale University, 4Centre for Research on Brain, Language and Music

Speech comprehension in noise is difficult, especially in a second language (L2). Previous fMRI work suggests that regions such as the inferior frontal gyrus (IFG) and angular gyrus (AG) are sensitive to manipulations of both comprehensibility and predictability. In addition, the gray matter volume (GMV) of the IFG and Heschl’s gyrus (HG) has been demonstrated to correlate positively with performance on speech in noise tasks. In our study, we build on this literature to investigate how gray matter volume in these regions may predict an electrophysiological (EEG) signature of speech comprehension, the N400. We collected T1-weighted structural brain images and EEG recordings from a sample of 28 young (M age = 25; SD = 4.3) highly proficient English/French bilinguals (M L2 AoA = 3.9; SD=3.5). During EEG recording, participants heard sentences that varied in their semantic constraint, such as “The secret agent was a spy” or “The man knew about the spy” in both languages and in both noise (16-talker babble) and quiet. After each sentence, participants repeated the final word. Only sentences where the final word was produced correctly were analyzed. EEG data were pre-processed in BrainVision and N400 amplitude from 300-500ms for each condition was extracted for statistical analysis. Structural MRI data were pre-processed using the CIVET pipeline and gray matter volume was extracted from AG, HG, and IFG pars orbitalis, triangularis, and opercularis as defined by the AAL atlas. Extracted N400 amplitude and GMV information were combined in R and linear mixed effect models for each ROI were estimated using lme4. Each model estimated the contextual N400 effect (Low Constraint – High Constraint) as a function of GMV in that region, Listening Condition, Language (L1/L2), and Years of L2 Experience. 
We additionally included total intracranial volume as a fixed effect, and included random intercepts for each participant with slopes that varied as a function of language and listening condition. Our analyses found 4-way interactions between GMV, Years of L2 Experience, Language, and Listening Condition in bilateral AG and IFG pars triangularis, as well as left IFG pars opercularis and right HG. When predicting the N400 effect in the L2, we observed two patterns. One pattern was indicative of more efficient processing, such that less GMV was associated with a larger N400 effect. The second pattern was indicative of increased sensitivity, such that more GMV was associated with a larger N400 effect. Years of L2 experience appeared to modulate which pattern was present in the data, such that participants with more L2 experience showed efficiency patterns in the bilateral IFG pars triangularis, as well as the right HG and AG. Participants with less L2 experience did not show efficiency patterns in any region, but did show sensitivity patterns in the bilateral IFG pars triangularis, left IFG pars opercularis and AG, as well as the right HG. We interpret these data as supporting and expanding the Dynamic Restructuring Hypothesis (Pliatsikas, 2019), which predicts patterns of growth followed by contraction in GMV as a function of bilingual experience.
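The contextual N400 effect described above, the mean 300–500 ms amplitude for low-constraint minus high-constraint trials, could be computed with a sketch like the following. This is an illustration only, not the authors' pipeline (they used BrainVision); the sampling rate, epoch onset, and array layout are assumptions.

```python
import numpy as np

def contextual_n400_effect(low_erp, high_erp, sfreq=500.0, tmin=-0.2):
    """Mean 300-500 ms amplitude of the low-constraint ERP minus the
    high-constraint ERP (the contextual N400 effect).

    low_erp / high_erp: 1-D arrays of voltages (microvolts) averaged over
    an N400 electrode cluster, epoched from `tmin` seconds relative to
    final-word onset at `sfreq` Hz. These conventions are illustrative
    assumptions, not the authors' actual data layout.
    """
    start = int(round((0.300 - tmin) * sfreq))  # sample index of 300 ms
    stop = int(round((0.500 - tmin) * sfreq))   # sample index of 500 ms
    return float(np.mean(low_erp[start:stop]) - np.mean(high_erp[start:stop]))
```

The resulting per-condition effect values would then serve as the dependent variable in the per-ROI mixed-effects models described above.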

C3 - Real-time diffusion-MRI-based tractography-guided TMS for speech cortical mapping

Pantelis Lioumis1,2, Dogu Baran Aydogan1, Risto Ilmoniemi1,2, Aki Laakso3; 1Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, 2BioMag Laboratory, HUS Medical Imaging Center, Helsinki University Hospital, 3Department of Neurosurgery, HUS Helsinki University Hospital

At Helsinki University Hospital, we have previously developed the use of navigated transcranial magnetic stimulation (nTMS) to map cortical language areas of the brain prior to neurosurgery. This methodology is in routine use in over 40 neurosurgical centers around the world, but its clinical value could be further improved if one could distinguish between language nodes that are essential and those that are secondary. Transcranial magnetic stimulation (TMS) is a non-invasive brain stimulation method that is used to excite neuronal populations in the cortex by means of brief, time-varying magnetic field pulses. The initiation or modulation of cortical activation depends on the background activation of the neurons and on the characteristics of the coil and its position and orientation with respect to the head. Fiber tractography based on diffusion magnetic resonance imaging (dMRI) is a powerful technique that enables non-invasive reconstruction of structural connections in the brain. Importantly, a tractography technique developed at Aalto University can provide connections reliably in real time. The new method offers novel opportunities for brain stimulation when used with TMS, leading to a new paradigm in which TMS operators can find and target desired connections in the brain. We combined the two methods so that speech cortical mapping was based on real-time tracking of connections (the fiber tractography and its calculations were performed prior to the experiments). We tested the new approach on several healthy volunteers by stimulating traditional speech areas and observing their structural connections to a distant area not typically studied for speech function (i.e., the supplementary motor area). We then applied stimulation along the computed tracks, which resulted in several different kinds of naming errors; for example, anomias and semantic and phonological paraphasias were evoked. 
Real-time fiber-tractography-guided TMS revealed unusual cortical sites (anterior to the left IFG, and the SMA) involved in speech processing during an object-naming task. The features of real-time tractography allow the TMS user to decide during the experiment how to perform the cortical mapping, so that each case can be studied individually and the entire cortical mantle taken into consideration when the fiber connections so indicate. The combination of TMS and real-time tractography can highlight different potential language-related areas and at the same time validate them. Additionally, connections to the contralateral hemisphere can easily be studied for speech cortical mapping with the new technique. Our experiments indicate that the combination of TMS and real-time tractography can play an important role in studying speech, both in basic research and in clinical applications, and pave the way for fully automated, algorithm-driven procedures based on future multichannel TMS systems.

C4 - Order and relevance: revising temporal structures

Maria Spychalska1; 1University of Cologne

Conjunctive sentences reporting two past events suggest that the events happened in the order of mention. This phenomenon is described as a “temporal implicature”. The events may be linked in some way, i.e. we may have “script” knowledge regarding the natural order in which such events normally happen, e.g. “She washed her hair and dried it”. However, if the events are unrelated, the only temporal order that is implied is the order in which the events are mentioned, e.g. “Julia read a book and sang a song” reports two events that could have happened in either order. Furthermore, the temporal implicature may sometimes not arise if the order of events is not contextually relevant. It is still an open question to what extent the temporal representation of events as observed in real life modulates linguistic processing, in particular whether temporal information enters the compositional semantic representation of the linguistic input and whether it modulates predictive processing in language. I present two ERP experiments investigating the processing of reversed-order sentences in contexts where the order is either relevant or irrelevant. The experimental paradigm resembles a memory game, in which participants assign points to a virtual player and read sentences describing game events. In each trial, four cards are dealt and the player flips two of them. Afterwards, the participants assign points based on the defined game rules: If the player flips two cards from the same category (animal or non-animal cards), she gets 1 point. If she flips two cards from different categories, the points depend on the cards’ order: If an animal card is flipped first, the player gets 2 points; if a non-animal card is flipped first, she gets 0 points. Subsequently, a sentence is presented word by word describing the game trial, e.g. “Julia has flipped a cat and a flower”. 
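The scoring rule of the game can be summarized as a small function. This is a hypothetical illustration of the rule as stated, not the experimental software; the string encoding of card categories is an assumption.

```python
def assign_points(first_card, second_card):
    """Points awarded to the virtual player for one game trial.

    Cards are represented simply as the strings "animal" or
    "non-animal"; this encoding is an illustrative assumption.
    """
    if first_card == second_card:
        return 1  # same category: 1 point regardless of order
    if first_card == "animal":
        return 2  # mixed categories, animal card flipped first: 2 points
    return 0      # mixed categories, non-animal card flipped first: 0 points
```

Note that only in the mixed-category case does the outcome depend on the order of the two flips, which is what makes order contextually relevant there and irrelevant in the same-category case.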
In the Correct-Order condition, sentences describe the events in the order in which they happened; in the Reversed-Order condition, the events are described in the reversed order. Reversed-Order conditions show a P600 effect relative to Correct-Order conditions at the first noun at which the order violation can be detected. This effect occurs both where the order is relevant for the point assignment (Mixed-Category) and where it is irrelevant (Same-Category). In addition, a modulation of the N400 by order is observed: Reversed-Order conditions elicit a larger N400 than Correct-Order conditions. In a follow-up experiment, the point assignment depends only on whether the cards come from the same category. A similar P600 effect is observed for the order violation, but no modulation of the N400. The experiments show that, irrespective of whether attention is directed towards the order as relevant in the given context, a violation of the order in the linguistic report engages reprocessing mechanisms, as indicated by the P600 effect, which can be linked to revision of the temporal representation. The N400 component appears to be modulated by the encoded order only if the order is contextually relevant.