Slide Sessions

Slide Session A: Thursday, August 16, 1:30 – 2:50 pm, Room 2000C. Chair: Manuel Carreiras
Slide Session B: Friday, August 17, 1:40 – 3:00 pm, Room 2000C. Chair: Brenda Rapp
Slide Session C: Saturday, August 18, 11:00 am – 12:20 pm, Room 2000C. Chair: Marina Bedny


Slide Session A

Thursday, August 16, 1:30 – 2:50 pm, Room 2000C

Chair: Manuel Carreiras
Speakers: M. Florencia Assaneo, Albert Costa, Han G. Yi, Jennifer Chesters

A1 - Spontaneous synchronization to speech reveals neural mechanisms facilitating language learning

M. Florencia Assaneo1, Pablo Ripolles1, Joan Orpella2,3,4, Ruth de Diego-Balaguer2,3,4,5, David Poeppel6; 1Department of Psychology, New York University, 2Cognition and Brain Plasticity Unit, IDIBELL, 3Department of Cognition, Development and Educational Psychology, University of Barcelona, 4Institute of Neuroscience, University of Barcelona, 5ICREA, 6Neuroscience Department, Max-Planck Institute for Empirical Aesthetics, Frankfurt

The ability to synchronize a motor output to an auditory input is a basic trait, present in humans from birth, with important cognitive implications. Infants’ proficiency in following a beat, for example, is a predictor of language skills. From a phylogenetic perspective, spontaneous synchronization (i.e., without explicit training) to an external rhythm is argued to be a unique characteristic of vocal learning species, including humans. The study of this distinctive attribute has typically focused on how body movements are entrained by non-speech signals, e.g., music or a beat. Here, instead, we investigate how humans spontaneously align their speech motor output to auditory speech input. To begin with, we introduce a simple behavioral task, in which individuals simultaneously perceive and produce syllables, with a remarkable outcome: the general population shows two qualitatively different behaviors. While some individuals are compelled to temporally align their utterances to the external stimulus, others show no interaction between the perceived and produced rhythms. Subsequently, we investigate the neurophysiological and brain structural features underlying this segregation. First, with a magnetoencephalography protocol, we show that, when passively listening to speech, synchronizers show increased brain-to-stimulus alignment over frontal areas as well as reduced rightward asymmetry in auditory cortex. Second, using diffusion-weighted MRI, we find a distinct lateralization pattern in a white matter cluster (likely part of the arcuate fasciculus, the pathway connecting frontal and auditory areas) that differentiated the groups, with synchronizers showing significantly greater left lateralization. Crucially, this structural difference relates to both the auditory and frontal neurophysiological results: increased leftward lateralization in the white matter was related to higher brain-to-stimulus synchrony in left frontal regions and to more symmetrical auditory entrainment. Finally, we demonstrate that the behavioral findings on audio-motor synchronization and its neural substrate have ecologically relevant consequences: synchronizers perform better on a word learning task. In summary, the combined behavioral, neurophysiological, and neuroanatomical results reveal a fundamental phenomenon: whereas some individuals are compelled to spontaneously align their speech output to the speech input, others remain impervious to the external rhythm. Moreover, we show that a deceptively simple behavioral task capitalizing on individual differences turns out to be diagnostic of audio-motor synchronization, neurophysiological function, brain anatomy, and performance on a word-learning task. Such a test can help to better characterize individual performance, leading to new discoveries related to speech processing and language learning that might otherwise be masked by pooling together populations with substantially different neural and behavioral attributes.
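
Brain-to-stimulus alignment of the kind reported here is commonly operationalized as a phase-locking value (PLV) between a neural signal and the speech envelope in a syllable-rate band. Below is a minimal sketch of that computation on synthetic signals; the 3.5–5.5 Hz band, sampling rate, and signal construction are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(neural, envelope, fs, band=(3.5, 5.5)):
    """PLV (0-1) between a neural channel and the speech envelope,
    restricted to a syllable-rate frequency band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_neural = np.angle(hilbert(filtfilt(b, a, neural)))
    phase_env = np.angle(hilbert(filtfilt(b, a, envelope)))
    # Length of the mean resultant vector of the phase differences.
    return np.abs(np.mean(np.exp(1j * (phase_neural - phase_env))))

# Toy example: a "synchronizer" channel tracks a 4.5 Hz syllable rhythm.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(0, 60, 1 / fs)
envelope = np.sin(2 * np.pi * 4.5 * t)
synchronizer = envelope + 0.5 * rng.standard_normal(t.size)
nonsynchronizer = rng.standard_normal(t.size)
print(phase_locking_value(synchronizer, envelope, fs))     # high
print(phase_locking_value(nonsynchronizer, envelope, fs))  # near zero
```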

A2 - Active bilingualism as a cognitive reserve factor against cognitive decline

Albert Costa1,2, Marco Calabria1, Mireia Hernández1, Gabriele Cattaneo1, Mariona Serra1, Anna Suades3, Montserrat Juncadella3, Ramon Reñé3, Isabel Sala4, Alberto Lleó4, Jordi Ortiz-Gil5, Lidia Ugas5, Asunción Ávila6, Isabel Gómez Ruiz6, César Ávila7; 1Center for Brain and Cognition, Pompeu Fabra University, Barcelona, Spain, 2Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain, 3Hospital Universitari de Bellvitge, L’Hospitalet de Llobregat, Barcelona, Spain, 4Neurology Department, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain, 5Hospital General de Granollers, Barcelona, Spain, 6Consorci Sanitari Integral, Barcelona, Spain, 7Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Spain

Introduction. There is growing evidence that bilingualism acts as a cognitive reserve (CR) factor in older adults and in age-related disorders. In this study we investigated the underlying cognitive and neural mechanisms that might explain such a bilingual advantage in the context of CR. Under the hypothesis that active bilingualism may confer a cognitive advantage, we tested the efficiency of executive control (EC), attention, and episodic memory in two groups of bilinguals: a) active bilinguals, who used their two languages regularly; and b) passive bilinguals, who understand two languages but essentially speak only one of them. Structural neuroimaging data were also acquired and compared between these two types of bilinguals. Methods. We tested three groups of participants: healthy older adults, patients with Alzheimer’s disease (AD), and patients with Mild Cognitive Impairment (MCI). ‘Active’ bilinguals were early, highly proficient Catalan-Spanish bilinguals who used their L2 frequently and switched between languages in their everyday life. ‘Passive’ bilinguals were Spanish speakers with exposure to Catalan (L2) but low use of it. 260 participants were tested on four EC tasks and an episodic memory task, and neuroimaging data were collected from 140 of them. Results. Three main results were observed. First, active bilingualism delays the symptoms of cognitive impairment in MCI, independently of education and of other CR factors such as leisure activity and job attainment. Second, active bilinguals outperformed passive bilinguals only on tasks of conflict monitoring. Third, active bilinguals with MCI showed more atrophy in the temporal lobe than passive ones, suggesting that they must accumulate a greater amount of cerebral atrophy than passive bilinguals before showing cognitive decline. Conclusions. These findings add new evidence that bilingualism acts as a CR factor, even in the preclinical stage of dementia. Specifically, age of L2 acquisition and language use are crucial variables in determining the bilingual advantage. The increased EC efficiency fostered by the active use of two languages might act as a compensatory mechanism, delaying the cognitive symptoms associated with age-related disorders.
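
The claim that active bilingualism delays symptoms "independently of education and other CR factors" amounts to a regression in which group membership predicts age at symptom onset with those factors as covariates. A minimal sketch with simulated data; the variable names, effect sizes, and single covariate are invented for illustration, not the study's model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
# Hypothetical data: bilingualism group, years of education as a
# competing cognitive-reserve proxy, and age at MCI symptom onset.
df = pd.DataFrame({
    "active": rng.integers(0, 2, n),          # 1 = active bilingual
    "education": rng.normal(12, 3, n),
})
df["onset_age"] = 70 + 4 * df["active"] + 0.3 * df["education"] \
                  + rng.normal(0, 3, n)

# Does active bilingualism delay onset over and above education?
model = smf.ols("onset_age ~ active + education", data=df).fit()
print(model.params)       # 'active' coefficient = delay in years
```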

A3 - Learning novel speech sounds reorganizes acoustic representations in the human superior temporal gyrus

Han G. Yi1, Matthew K. Leonard1, Bharath Chandrasekaran2, Kirill V. Nourski3, Matthew A. Howard III3, Edward F. Chang1; 1University of California, San Francisco, 2The University of Texas at Austin, 3The University of Iowa

Speech perception requires listeners to be sensitive to a wide range of acoustic cues for phonetic category, speaker identity, and pitch. Although these cues exist in all languages, they are often used differently, which presents challenges when listening to an unfamiliar language. For example, whereas English uses pitch primarily to signal a variety of prosodic cues, Mandarin Chinese also uses four distinct pitch patterns, called lexical tones, to change word-level meaning. Here, we ask whether learning to identify lexical tones is associated with the emergence of new neural representations, or whether existing pitch representations used for prosody are reorganized to accommodate lexical tone. To answer this question, we directly recorded cortical activity using electrocorticography in humans while they performed a multi-day training task, learning to identify tones from words produced by male and female native Mandarin Chinese speakers. We found neural populations in bilateral mid-anterior superior temporal gyrus (STG) that were highly selective for particular tones, independent of phonetic or speaker information. Crucially, behavioral performance was associated with neural clustering of tones in these populations, such that increased identification accuracy was associated with more distinct neural representations. Finally, we demonstrate that the neural representation of Mandarin Chinese tones in STG reflected the same representation of relative pitch that encoded lexical stress in English sentences. Together, these results demonstrate that learning to identify unfamiliar speech sounds enhances pre-existing representations of the relevant acoustic cues, rather than generating novel encoding patterns.
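
The abstract does not name the metric behind "neural clustering of tones"; one plausible operationalization is a silhouette score over trial-wise response patterns, where higher scores mean more distinct tone representations. A toy sketch with simulated electrode features (all dimensions and effect sizes are hypothetical):

```python
import numpy as np
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
n_trials, n_electrodes = 200, 16
tones = rng.integers(0, 4, n_trials)   # four Mandarin tone labels

def simulate_responses(separation):
    """Trial x electrode patterns: each tone has its own centroid, and
    'separation' controls how clustered the representations are."""
    centroids = rng.normal(0, 1, (4, n_electrodes))
    noise = rng.normal(0, 1, (n_trials, n_electrodes))
    return centroids[tones] * separation + noise

# Silhouette score rises as tone representations become more distinct,
# the quantity the study relates to identification accuracy.
for sep in (0.2, 1.0, 2.0):
    X = simulate_responses(sep)
    print(sep, round(silhouette_score(X, tones), 3))
```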

A4 - Neural changes related to successful stutter reduction using transcranial direct current stimulation

Jennifer Chesters1, Riikka Mottonen2, Kate E. Watkins1; 1Department of Experimental Psychology, University of Oxford, 2School of Psychology, University of Nottingham

Our recent randomized controlled trial showed that speech disfluency can be reduced by transcranial direct current stimulation (tDCS) paired with a fluency intervention. Anodal tDCS was applied over left inferior frontal cortex for 20 minutes at 1 mA while fluency was temporarily enhanced using metronome-timed speech and choral speech in five daily sessions. Disfluency was reduced one and six weeks after the intervention in the group of people who stutter receiving active tDCS (PWS-A), compared with the group receiving sham stimulation (PWS-S) (Chesters et al., 2018). Here, we investigated the neural changes related to this stutter reduction using functional MRI (fMRI). The fMRI session included three conditions in which sentences were read aloud: solo reading, metronome-timed speech, and choral speech. During the baseline, participants saw a sentence in false font and remained silent. Imaging data were acquired using sparse sampling, allowing participants to speak without scanner noise and to hear the metronome and choral speech clearly. All participants were male. Twenty-five PWS with moderate to severe stutter severity were randomly assigned to the PWS-A group (N=13) and PWS-S group (N=12). They were scanned before and one week after the tDCS intervention. Fifteen participants who do not stutter were also scanned. Imaging data were analysed with a general linear model in FSL. To examine changes in activity from pre- to post-intervention, we used a region-of-interest (ROI) analysis. Control participants were fluent during all speaking conditions. PWS stuttered on some sentences during the baseline and post-intervention scans. Stuttered sentences were regressed out, so that the analysis focussed only on fluent speech. Spherical functional ROIs with 6-mm radius were defined from peak coordinates of previous studies showing abnormal levels of activity in PWS: in mouth motor cortex, ventral premotor cortex, midbrain, cerebellum and dorsal anterior insula bilaterally, left SMA and Heschl’s gyrus, and right ventral anterior insula. Because basal ganglia circuitry is implicated in accounts of stuttering, we also included anatomical ROIs for the left and right caudate nucleus and putamen. There was activity within each of the ROIs for at least one of the three groups in the whole-brain analysis of the pre-intervention scan. We compared the differences in percent signal change in these ROIs from pre- to post-intervention for the PWS-A and PWS-S groups. The PWS-A group showed significant increases in activity across all ROIs and all speaking conditions, compared with the PWS-S group (main effect of tDCS group: F(1,23) = 5.61, p = .027; no significant interaction between group and ROI or speaking condition). Our results indicate that the stutter reduction following the combined application of anodal tDCS with temporary fluency techniques is associated with increased activity across the speech network. This increase includes regions previously shown to be under-active in PWS during speaking, but also regions where over-activation has been identified.
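
The ROI analysis extracts signal from 6-mm spheres around literature peak coordinates. A minimal sketch of that extraction step using nibabel; the file name and the example coordinate are placeholders, not the study's actual peaks.

```python
import numpy as np
import nibabel as nib

def sphere_mean(img_path, center_mm, radius_mm=6.0):
    """Mean of a 3D volume (e.g. a percent-signal-change map) within a
    spherical ROI defined in scanner/MNI millimetre coordinates."""
    img = nib.load(img_path)
    data = img.get_fdata()
    # Enumerate voxel-grid coordinates and map them to mm via the affine.
    ijk = np.indices(data.shape[:3]).reshape(3, -1).T
    mm = nib.affines.apply_affine(img.affine, ijk)
    mask = np.linalg.norm(mm - np.asarray(center_mm), axis=1) <= radius_mm
    return data.reshape(-1)[mask].mean()

# e.g. a 6-mm sphere around a hypothetical left mouth-motor peak:
# print(sphere_mean("psc_post_minus_pre.nii.gz", (-54, -8, 30)))
```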



Slide Session B

Friday, August 17, 1:40 – 3:00 pm, Room 2000C

Chair: Brenda Rapp
Speakers: Mante Nieuwland, Seyedehrezvan Farahibozorg, Laura Gwilliams, Beth Jefferies

B1 - Dissociable effects of prediction and integration during language comprehension: Evidence from a large-scale study using brain potentials

Mante Nieuwland1,2, Dale J. Barr3, Federica Bartolozzi1,2, Simon Busch-Moreno4, Emily Darley5, David I. Donaldson6, Heather J. Ferguson7, Xiao Fu4, Evelien Heyselaar1,8, Falk Huettig1, E. Matthew Husband9, Aine Ito2,9, Nina Kazanina5, Vita Kogan2, Zdenko Kohút10, Eugenia Kulakova11, Diane Mézière2, Stephen Politzer-Ahles9,12, Guillaume Rousselet3, Shirley-Ann Rueschemeyer10, Katrien Segaert8, Jyrki Tuomainen4, Sarah Von Grebmer Zu Wolfsthurn5; 1Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands, 2School of Philosophy, Psychology and Language Sciences, University of Edinburgh, UK, 3Institute of Neuroscience and Psychology, University of Glasgow, UK, 4Division of Psychology and Language Sciences, University College London, UK, 5School of Experimental Psychology, University of Bristol, UK, 6Psychology, Faculty of Natural Sciences, University of Stirling, UK, 7School of Psychology, University of Kent, Canterbury, UK, 8School of Psychology, University of Birmingham, UK, 9Faculty of Linguistics, Philology & Phonetics, University of Oxford, UK, 10Department of Psychology, University of York, UK, 11Institute of Cognitive Neuroscience, University College London, UK, 12Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Kowloon, Hong Kong

Predictable words are easier to process than unpredictable words: ‘bicycle’ is easier to process than ‘elephant’ in “You never forget how to ride a bicycle/an elephant once you’ve learned”. For example, predictable words are read and recognized faster than unpredictable words (e.g., Clifton, Staub & Rayner, 2007). Predictable words also elicit reduced N400 amplitude (Kutas & Hillyard, 1984). However, it remains unclear whether such N400-indexed facilitation is driven by actual prediction (i.e., predictable words are activated before they appear), by integration (i.e., predictable words are semantically more plausible and therefore easier to integrate into the sentence context after they have appeared), or by both. The access-integration debate has long engrossed the psychology and neuroscience of language (for reviews, see Kutas & Federmeier, 2011; Lau, Phillips & Poeppel, 2008; Van Berkum, 2009), but it has yet to reach a conclusion, and there is support for both views. Supporting the access view, numerous studies show that people can predict the meaning of upcoming words during sentence comprehension, and some studies suggest that N400 amplitude is not a function of sentence plausibility. Supporting the integration view, however, several studies report N400 modulations by semantic or pragmatic plausibility that are not easily explained in terms of prediction alone (e.g., Rueschemeyer, Gardner & Stoner, 2015). The mixed evidence has led some researchers to question the viability of an access-only or integration-only view of the N400, and to propose a hybrid, ‘multiple-process’ account (Baggio & Hagoort, 2011). This account views N400 activity as reflecting cascading access and integration processes: effects of prediction and of integration are both visible in N400 activity, but effects of prediction precede and are functionally distinct from those of integration. We investigated this issue by exploring modulation of the N400 (Kutas & Hillyard, 1980), an event-related potential (ERP) component commonly considered the brain’s index of semantic processing (Kutas & Federmeier, 2011), using a temporally fine-grained analysis of data from a large-scale (N=334) replication study (Nieuwland et al., 2018, which attempted to replicate DeLong, Urbach & Kutas, 2005). We asked whether prediction and integration have dissociable effects on N400 amplitude, and how these effects unfold over time. Improving on previously used methods, we simultaneously modelled variance associated with predictability and plausibility, while also controlling for semantic similarity (latent semantic analysis). Modelling activity at each EEG channel and time point within an extended time window (e.g., Hauk, Davis, Ford, Pulvermüller, & Marslen-Wilson, 2006), we examined the time course and spatial distribution of the effect of predictability while appropriately controlling for plausibility, and vice versa. We observed overlapping effects of predictability and plausibility on the N400, albeit with distinct spatiotemporal profiles. Our results challenge the view that semantic facilitation of predictable words reflects the effects of either prediction or integration alone, and suggest that facilitation arises from a cascade of processes that access word meaning and integrate it with the context into a sentence-level meaning.
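
Simultaneously modelling predictability and plausibility at every channel and time point is, at heart, a mass-univariate multiple regression. A schematic sketch with simulated single-trial EEG follows; the dimensions and predictors are illustrative, and the published analysis used mixed-effects and permutation machinery beyond this.

```python
import numpy as np

rng = np.random.default_rng(2)
n_items, n_channels, n_times = 160, 32, 120
# Hypothetical item-level predictors (z-scored): cloze predictability,
# plausibility, and LSA semantic similarity as a covariate.
X = rng.normal(size=(n_items, 3))
design = np.column_stack([np.ones(n_items), X])   # intercept + 3 regressors

eeg = rng.normal(size=(n_items, n_channels, n_times))  # single-trial ERPs

# One least-squares fit per channel/time point; fitting all regressors
# at once means predictability is controlled for plausibility and
# vice versa.
Y = eeg.reshape(n_items, -1)                  # items x (channels * times)
betas, *_ = np.linalg.lstsq(design, Y, rcond=None)
betas = betas.reshape(4, n_channels, n_times)
cloze_effect = betas[1]   # channel x time map of the predictability beta
plaus_effect = betas[2]   # channel x time map of the plausibility beta
```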

B2 - Processor Hubs Versus Integrator Hubs: Distinct Roles for Anterior Temporal Lobe and Angular Gyrus in Semantic Processing

Seyedehrezvan Farahibozorg1,2, Richard Henson1, Anna Woollams3, Elisa Cooper1, Gemma Evans4, Yuanyuan Chen1, Karalyn Patterson1, Olaf Hauk1; 1MRC Cognition and Brain Sciences Unit, University of Cambridge, 2Wellcome Centre For Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, 3Neuroscience and Aphasia Research Unit, School of Psychological Sciences, University of Manchester, 4Department of Psychology, University of Chester

Brain imaging research to date has not reached a consensus as to whether distributed semantic networks are organised around one central hub or consist of several heteromodal convergence zones (Lambon Ralph et al. 2016; Binder 2016; Pulvermüller 2013). In this study, we addressed this question by drawing a distinction between two possible roles of a semantic hub, namely higher-level amodal processing and cross-modal integration of modality-specific information (Woollams & Patterson 2018), at different stages of visual word processing. Accordingly, we hypothesised that semantic variables would modulate neural activity inside processor hub(s) and the connectivity of integrator hub(s) to heteromodal and unimodal semantic areas (i.e. amplitude versus connectivity modulation). To test these hypotheses, we utilised the spatio-temporal resolution of source-estimated concurrent electro-/magnetoencephalography (EEG/MEG) and: (i) monitored the time course of semantic modulation in a data-driven manner from whole-brain evoked responses in order to identify the processor hub(s); (ii) computed functional connectivity (coherence) between candidate hub regions (left anterior temporal lobe (ATL), angular gyrus (AG), middle temporal gyrus (MTG) and inferior frontal gyrus (IFG)) and the whole brain in order to identify the integrator hub(s) through differential modulations of connections to the sensory-motor-limbic systems; (iii) compared network models of evoked responses among the candidate hub regions (with the visual word form area as the input region) using effective connectivity (dynamic causal modelling (DCM)) in order to identify the integrator hub(s) within the heteromodal subnetwork of semantics. For this purpose, we recruited 17 healthy native English speakers (ages 18-40) who performed a concreteness decision task in a visual word recognition paradigm while EEG and MEG (70 and 306 channels) were recorded simultaneously (Elekta Neuromag). Preprocessing included Maxfilter, band-pass filtering (1-48 Hz), ICA artefact rejection and epoching. Forward modelling and source estimation were performed on combined EEG/MEG data, based on individual MRIs, using boundary element models and L2 minimum norm estimation. Firstly, in order to identify the processor hub(s), source-reconstructed evoked responses were compared between concrete and abstract words using whole-brain cluster-based permutation tests. Left ATL was revealed as a processor hub as early as 100 ms peri-stimulus, persisting into later stages of semantic word processing at ~400 ms, when the effects spread to the bilateral ATLs and anterior IFGs. Secondly, among the tested candidate hubs, whole-brain seed-based connectivity analyses highlighted only ATL and AG as potential integrator hubs, through differential modulations of their coherence to the unimodal semantic areas. More specifically, while ATL showed higher connectivity to the right orbitofrontal cortices for abstract words, AG showed higher connectivity to the somatosensory cortices for concrete words. Thirdly, the DCM analysis showed that a single-hub model provided the highest evidence for connectivity among the left-hemispheric heteromodal subnetwork of semantics, in which ATL acted as the integrator hub during the earliest time window (within 250 ms), while AG played this role during later time windows (within 450 ms). Therefore, our results suggest that distinct brain systems underlie the processor and integrator hub roles in dynamic semantic networks, and that the two roles overlap in the ATL.
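
The seed-based coherence used to identify integrator hubs can be sketched with scipy. In the study it was computed between source-reconstructed time series and contrasted between concrete and abstract words; here two synthetic series stand in, and the band and parameters are illustrative.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(3)
fs, n_samples = 250, 250 * 60
seed = rng.normal(size=n_samples)          # e.g. an ATL source series
# Hypothetical target (e.g. somatosensory cortex) partly driven by the
# seed, so seed-target coherence exceeds chance.
target = 0.6 * seed + rng.normal(size=n_samples)

f, cxy = coherence(seed, target, fs=fs, nperseg=fs * 2)
band = (f >= 8) & (f <= 12)
print("mean 8-12 Hz coherence:", cxy[band].mean())
# The study's contrast would repeat this per condition (concrete vs.
# abstract epochs) and compare the resulting coherence spectra.
```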

B3 - Parsing continuous speech into linguistic representations

Laura Gwilliams1,2, Jean-Rémi King1,3, David Poeppel1,4; 1New York University, 2NYUAD Institute, 3Frankfurt Institute for Advanced Studies, 4Max-Planck-Institute

INTRODUCTION. Language comprises multiple levels of representation, from phonemes (e.g. /b/ /p/) to lexical items (e.g. bear, pear) to syntactic structures (e.g. bears [SUBJECT] eat [VERB] pears [OBJECT]). Here we address two research questions that arise in online processing of naturalistic speech: 1) which representational states are encoded in neural activity; 2) what overarching algorithm orchestrates these representations to ultimately derive meaning? METHODS. Eighteen participants listened to four narratives that were fully annotated, from speech sounds to syntactic structures, such that each level could be correlated with brain activity. Two ~1-hour sessions were recorded from each participant. This naturalistic but controlled setup allowed us to decode, localise and track phonological, lexical and syntactic operations from magnetoencephalography (MEG) recordings using machine learning approaches. RESULTS. First, acoustic-phonetic features (e.g. voicing, manner, place of articulation) could be successfully discriminated from a sequence of neural responses unfolding between ~100 ms and ~400 ms after phoneme onset. Second, part of speech (e.g. verb, noun, adjective), indicative of lexical processing, was decodable between ~150 ms and ~800 ms after word onset. Third, we could decode and track proxies of both syntactic operations (e.g. number of closing nodes) and syntactic states (e.g. depth of tree). Interestingly, some of these syntactic representations were clearly present several hundred milliseconds before word onset, whereas others peaked ~300 ms after it. CONCLUSION. These sustained and evoked MEG responses suggest that the human brain encodes each level of representation proposed by linguistic theories. Importantly, the corresponding neural assemblies overlap in space and time, likely facilitating concurrent access across these low-to-high-level representations, in line with a cascade architecture. Put another way, the brain does not discard the representation of a lower-level linguistic property (e.g. phonetic content) once a higher-level feature has been derived (e.g. part of speech). Finally, our study demonstrates how the combination of machine learning and traditional statistics can bridge the gap between spatiotemporally resolved neuroimaging data and rich but tractable naturalistic stimuli.
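
The decoding logic, one classifier per time sample trained to read out a linguistic feature from sensor patterns, can be sketched as below. The data are synthetic, and the injected 100-400 ms "voicing" effect is a stand-in for the annotated features described above, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_epochs, n_sensors, n_times = 300, 50, 100  # epochs locked to phoneme onset
voicing = rng.integers(0, 2, n_epochs)       # binary phonetic feature

meg = rng.normal(size=(n_epochs, n_sensors, n_times))
# Inject a decodable voicing pattern in samples 25-75 (~100-400 ms).
pattern = rng.normal(size=n_sensors)
meg[voicing == 1, :, 25:75] += 0.3 * pattern[:, None]

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# Fit one decoder per time sample to trace when the feature is readable.
scores = [cross_val_score(clf, meg[:, :, t], voicing, cv=5,
                          scoring="roc_auc").mean()
          for t in range(n_times)]
print("peak AUC:", max(scores), "at sample", int(np.argmax(scores)))
```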

B4 - Individual differences in default mode connectivity relate to perceptually-coupled and decoupled modes of semantic retrieval: Functional consequences for comprehension and mind-wandering

Beth Jefferies1, Meichao Zhang1, Nicola Savill2, Daniel Margulies3, Jonathan Smallwood1; 1University of York, UK, 2York St John University, UK, 3CNRS, Institut du cerveau et de la moelle épinière (ICM), Paris

A contemporary puzzle in cognitive neuroscience concerns how regions of the default mode network (DMN) support opposing mental states. This network is activated in tasks tapping comprehension, memory retrieval, imagination and creativity. However, DMN activity is also associated with poor task performance and mind-wandering. A potential solution to this puzzle is offered by the observation that regions linked to heteromodal mental representations, such as ventral and lateral portions of the anterior temporal lobes (ATL), show a pattern of intrinsic connectivity to both the DMN and visual cortex. This raises the possibility that ATL supports comprehension at the end of the ventral visual stream, yet also contributes to off-task self-generated thought when this region is perceptually decoupled. We examined the association between these patterns of connectivity and individual differences in comprehension and mind-wandering during reading. Behaviourally, there was a strong negative correlation between these measures: people who mind-wander more often during reading comprehend the text less well. In Study 1, we found that people with good comprehension showed stronger activation in middle temporal gyrus (MTG) for meaningless orthographic inputs. Much of this activation fell within the DMN, in a region of the temporal lobe associated with heteromodal conceptual processing. In Study 2, we examined individual differences in the intrinsic connectivity of this DMN region at rest and related these patterns of connectivity to behavioural performance measured outside the scanner. We found that participants reported more frequent mind-wandering during reading when MTG had relatively weak connectivity with visual regions. Conversely, better comprehension was associated with greater functional connectivity between MTG and another region of the DMN within the anterior cingulate cortex. These findings show that DMN connectivity is associated with good as well as poor comprehension, and that relatively low-level visual processes contribute to higher-order cognitive states. Our individual differences analysis complements task-based studies of comprehension in the ventral visual stream by showing that activation in heteromodal semantic regions is necessary but not sufficient for good comprehension. In people with strong connectivity between MTG and visual cortex, semantic cognition tends to be perceptually coupled, while perceptually decoupled semantic retrieval is associated with poor comprehension. Thus, the opposing roles of the DMN in different mental states may reflect its connectivity to task-relevant or task-irrelevant information.
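
The individual-differences logic of Study 2 is an across-participant correlation between seed connectivity and behaviour. A toy sketch with hypothetical values; the signs are chosen to mirror the reported pattern and none of this is real data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n_subjects = 60
# Hypothetical per-participant values: MTG-visual cortex connectivity
# (Fisher-z), comprehension scores, and mind-wandering frequency.
mtg_visual = rng.normal(size=n_subjects)
comprehension = 0.5 * mtg_visual + rng.normal(size=n_subjects)
mind_wandering = -0.5 * mtg_visual + rng.normal(size=n_subjects)

# Weaker MTG-visual coupling -> more mind-wandering during reading,
# and the two behavioural measures trade off against each other.
print(pearsonr(mtg_visual, mind_wandering))
print(pearsonr(comprehension, mind_wandering))
```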



Slide Session C

Saturday, August 18, 11:00 am – 12:20 pm, Room 2000C

Chair: Marina Bedny
Speakers: Florence Bouhali, Emilie McKinnon, Katherine Travis, Travis White-Schwoch

C1 - Distinct areas for the processing of graphemes and words in the left occipitotemporal cortex

Florence Bouhali1,2,3,4,5, Zoé Bézagu1,2,3,4, Stanislas Dehaene6,7, Laurent Cohen1,2,3,4,8; 1Inserm, U 1127, F-75013, Paris, France, 2CNRS, UMR 7225, F-75013, Paris, France, 3Sorbonne Universités, UPMC Univ Paris 06, UMR S 1127, F-75013, Paris, France, 4Institut du Cerveau et de la Moelle épinière, ICM, F-75013, Paris, France, 5University of California San Francisco (UCSF), San Francisco, CA 94143, 6Cognitive Neuroimaging Unit, CEA DRF/I2BM, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin center, 91191 Gif/Yvette, France, 7Collège de France, 11 Place Marcelin Berthelot, 75005 Paris, France, 8AP-HP, Hôpital de la Pitié Salpêtrière, Fédération de Neurologie, F-75013, Paris, France

Word reading in alphabetic scripts can be achieved either by print-to-sound mapping or by direct lexical access. While the lexico-semantic route can operate on coarse orthographic units such as non-contiguous subsets of letters, phonological decoding is based on the exact identification of graphemes and of their order, to support their conversion into phonemes. At the neural level, the visual word form area (VWFA), within the left occipito-temporal cortex, has been extensively implicated in the extraction of orthographic information. Yet it is unclear how the VWFA extracts information for access both to phonology and to the lexicon, as these two orthographic codes differ strongly in nature. In order to identify regions potentially implicated in grapheme encoding for grapheme-to-phoneme mapping, the current study manipulated the perception of multi-letter graphemes (e.g., AI in chair), which are crucial for phonological decoding but not for lexical access. Twenty adults performed both lexical decision and naming tasks in French in the MRI scanner, on words and pseudowords containing a high proportion of multi-letter graphemes. The perception of multi-letter graphemes was encouraged or disrupted visually using both font color alternation and spacing, located either between graphemes (ch-ai-r) or in the middle of graphemes (c-ha-ir). Behaviorally, we observed that the perceptual disruption of multi-letter graphemes impaired reading, especially when naming pseudowords – the condition that relied most on the phonological reading route. Within the left occipito-temporal cortex, the manipulation of graphemes affected the activity of a region near the mid-fusiform sulcus in opposite directions, depending on whether stimulus and task demands required a lexical or a phonological strategy. A separate contrast revealed that this same region was also more activated overall in participants whose response times were most affected by the manipulation of graphemes. Interestingly, this “grapheme region” was more medial than the typical VWFA identified separately, and differed significantly from the VWFA in several respects. The medial “grapheme region” was more sensitive to string length, and its connectivity during the tasks to the intraparietal sulcus, known for its integral role in letter-by-letter reading, was modulated by phonological demands. In contrast, the VWFA showed large effects of word frequency and lexicality. Our results hence suggest a partial dissociation within the left ventral stream between regions implicated in orthographic processing: medial fusiform regions would encode orthographic information at a fine-grained graphemic level (sublexical processing), while the typical VWFA – located laterally within the occipito-temporal sulcus – may participate more in direct lexical access. This view is compatible with reports of more medial activations for reading in children, who rely more on phonological decoding than adults, and in Braille readers, who are constrained by the serial nature of the sensory input. These two regions would collaborate particularly in conditions requiring phonological processing, as suggested by our connectivity analyses, with the VWFA as a primary output of the visual system towards the reading network.
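
The visual manipulation hinges on where breaks fall relative to grapheme boundaries. A toy sketch of the congruent (ch-ai-r) versus incongruent (c-ha-ir) segmentations; display details like colour alternation are omitted, and the boundary indices are specific to this one example.

```python
def segment(word, boundaries):
    """Split a word at the given letter indices."""
    cuts = [0, *boundaries, len(word)]
    return [word[a:b] for a, b in zip(cuts, cuts[1:])]

word = "chair"
congruent = segment(word, [2, 4])    # breaks between graphemes: ch-ai-r
incongruent = segment(word, [1, 3])  # breaks inside graphemes:  c-ha-ir
print("-".join(congruent), "vs", "-".join(incongruent))
```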

C2 - Synergism between cortical damage and white matter disconnection contributes to aphasia severity

Emilie McKinnon1, Barbara Marebwa1, Chris Rorden2, Alexandra Basilakos2, Ezequiel Gleichgerrcht1, Julius Fridriksson2, Leonardo Bonilha1; 1Medical University of South Carolina, 2University of South Carolina

Language impairments are common after a dominant-hemisphere stroke, although the relative contributions of damage to cortical areas and to white matter pathways to aphasia severity remain poorly understood. In this study, we assessed whether our understanding of aphasia severity and linguistic skills could be improved by quantifying damage to both gray and white matter areas often implicated in language. Specifically, we hypothesized that cortical disconnection helps explain critical differences in language function, particularly when cortical areas are largely intact. We recruited 90 right-handed participants (age 58.8 ± 12.1 years, 34 females; 42.8 ± 50 months post stroke) with a single left-hemisphere stroke, who underwent MRI (T1-weighted, T2-weighted and DTI; b = 0, 1000 s/mm2) and the Western Aphasia Battery-Revised (mean AQ 63 ± 28). In addition, we scanned 60 older, self-reported cognitively normal participants (47 females, age 55.1 ± 8.6 years). T1-weighted images were segmented into probabilistic gray and white matter maps using either SPM12’s unified segmentation-normalization or enantiomorphic normalization. The probabilistic gray matter map was divided into JHU anatomical regions, and white and gray matter parcellation maps were registered into diffusion imaging space, where pairwise probabilistic DTI fiber tracking was computed. Weighted connectomes were constructed based on the number of streamlines, corrected for distance traveled and for total gray matter volume. Lesions were drawn on T2-weighted images, and proportional damage to ROIs was determined from the intersection of lesion drawings and JHU ROIs. ROIs were considered disconnected when their number of connections fell more than 2 standard deviations below the mean number of connections in the non-brain-damaged cohort. Our results focused on a language-specific subnetwork consisting of Broca’s area, supramarginal gyrus (SG), angular gyrus (AG), superior temporal gyrus (STG), middle temporal gyrus (MTG), inferior temporal gyrus (ITG) and the posterior parts of the STG (pSTG) and MTG (pMTG). Disconnection within this subnetwork significantly aided the explanation of aphasia severity (WAB-AQ) when cortical areas suffered between 21% and 91% damage. Outside of this range, disconnection did not significantly help explain the variability in aphasia quotient. In additional ROI-based analyses, damage to the left superior longitudinal fasciculus explained an extra 31% of variance (r=-0.56, p<0.05) in WAB fluency scores beyond the 29% of variance explained by damage to Broca’s area alone. Likewise, individual auditory comprehension scores were explained by quantified damage to the inferior longitudinal fasciculus (r=-0.23, p<0.05) in addition to quantified damage to Wernicke’s area. In conclusion, quantifying damage to white matter pathways can help explain individual language impairments in subjects with chronic aphasia. Our results suggest that this benefit is largest when cortical damage is at intermediate levels.
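
The disconnection criterion, flagging an ROI when its connectivity falls more than 2 SD below the control mean, can be sketched directly. Here "number of connections" is approximated by total connection strength, and the ROI count and values are toy, not the JHU parcellation.

```python
import numpy as np

def disconnected_rois(patient_connectome, control_connectomes):
    """Flag ROIs whose total connection strength falls more than 2 SD
    below the mean of a non-brain-damaged control cohort.
    Connectomes are (n_rois, n_rois) weighted, symmetric matrices."""
    patient_strength = patient_connectome.sum(axis=1)
    control_strength = np.stack([c.sum(axis=1) for c in control_connectomes])
    mu = control_strength.mean(axis=0)
    sd = control_strength.std(axis=0, ddof=1)
    return patient_strength < mu - 2 * sd       # boolean mask per ROI

# Toy use: an 8-ROI language subnetwork and 60 controls.
rng = np.random.default_rng(6)
controls = [np.abs(rng.normal(1, 0.2, (8, 8))) for _ in range(60)]
patient = np.abs(rng.normal(1, 0.2, (8, 8)))
patient[3, :] *= 0.1                            # simulate one ROI losing
patient[:, 3] *= 0.1                            # nearly all its connections
print(disconnected_rois(patient, controls))
```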

C3 - More than Myelin: Interrogating white matter tissue properties underlying receptive and expressive language abilities in 8-year-old children

Katherine Travis1, Lisa Brucket1, Aviv A. Mezer2, Michal Ben-Shachar3, Heidi M. Feldman1; 1Stanford University, 2The Hebrew University of Jerusalem, 3Bar Ilan University

Background: Language abilities in children and adults rely on multiple white matter tracts, including both dorsal and ventral pathways. Prior studies using diffusion MRI (dMRI) have shown that structural properties of these pathways vary in association with age-related changes in development and aging, and with individual differences in receptive and expressive language abilities in both adults and children. In children, it is often assumed that such structure-function associations are driven by ongoing myelination. However, dMRI metrics, such as fractional anisotropy (FA), are sensitive to multiple tissue properties, including myelin content and axonal properties, specifically crossing fibers, axonal diameter and axonal density. Clarifying the contributions of myelin to individual variations in language abilities in children therefore requires additional MRI techniques with increased sensitivity to myelin content, such as quantitative T1 MRI (qT1). R1 from qT1 measures the longitudinal relaxation rate of water (R1 = 1/T1) and is directly associated with myelin content. Here, we combined metrics from dMRI (FA) and qT1 (R1) to interrogate the contributions of myelin content and axonal properties to individual variations in children’s receptive and expressive language abilities. Methods: We obtained 30-direction dMRI (b = 1,000 s/mm2) and qT1 data in 8-year-old children (N=24). Children also underwent behavioral testing of expressive and receptive language using the Clinical Evaluation of Language Fundamentals (CELF-4). qT1 scans were acquired using a spoiled gradient echo sequence (flip angles 4°, 10°, 20°, 30°), corrected for inhomogeneity with a spin-echo inversion-recovery sequence with multiple inversion times (TI = 400, 1200, 2400 ms). Whole-brain deterministic tractography and automated tract quantification were used to segment dorsal and ventral language-related pathways. We quantified FA and R1 values along the trajectory of each tract. Associations between FA or R1 and standard scores for receptive and expressive language skills were examined using Pearson correlations. Results: In the right inferior fronto-occipital fasciculus, both FA and R1 were significantly and positively associated with language scores. In the left and right inferior longitudinal fasciculus, only R1 was significantly correlated with language scores. In the left inferior fronto-occipital fasciculus and right uncinate fasciculus, only FA was significantly correlated with language scores. No significant correlations were detected between language scores and either white matter metric in dorsal-stream tracts. Conclusions: The current evidence suggests that multiple tissue properties, including both myelin content and axonal properties, account for individual variations in children’s language abilities at school age. The present findings also demonstrate that R1 may be sensitive to structure-function associations not otherwise captured by FA alone. Future analyses will focus on clarifying which specific linguistic processes (e.g., lexico-semantic) broadly captured by the current language measure account for the pattern of associations within ventral but not dorsal pathways. Overall, the present study emphasizes the importance of combining MRI techniques to advance our understanding of how white matter properties contribute to language abilities in children. Specifically, we demonstrate that the neurobiological underpinnings of language abilities in children are not limited to myelin.
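
The core structure-function test pairs the myelin-sensitive R1 = 1/T1 with a Pearson correlation against language standard scores, tract by tract. A toy sketch for a single tract follows; all values are simulated and the effect size is invented.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n_children = 24

# Hypothetical per-child summaries for one ventral tract (e.g. the left
# ILF): mean quantitative T1 (seconds) and a CELF language score.
t1 = rng.normal(0.9, 0.05, n_children)
r1 = 1.0 / t1                          # R1 = 1/T1, myelin-sensitive
language = 100 + 120 * (r1 - r1.mean()) + rng.normal(0, 8, n_children)

# Structure-function association: does R1 track language scores?
r, p = pearsonr(r1, language)
print(f"r = {r:.2f}, p = {p:.3f}")
```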

C4 - Pre-school auditory processing predicts school-age reading achievement: A 4-year longitudinal study

Travis White-Schwoch1, Elaine C. Thompson1, Silvia Bonacina1, Jennifer Krizman1, Trent Nicol1, Ann R. Bradlow1, Steven G. Zecker1, Nina Kraus1; 1Northwestern University

Several prominent theories of developmental dyslexia propose that poor auditory processing is a risk factor for reading impairment. A crucial prediction of this hypothesis is that auditory processing is faulty before children begin learning to read. The frequency-following response (FFR) is a scalp-recorded electrophysiological response that relies on synchronous neural firing along the auditory pathway. The FFR does not depend on attention or task compliance and is appropriate across the lifespan, making it an excellent approach for longitudinal studies in children. We previously reported that, in preschoolers (ages 3-4 years), FFRs to consonants in noise strongly predicted early literacy skills such as phonological awareness and rapid automatized naming, as well as performance on early literacy tests one year later (White-Schwoch et al., 2015, PLOS Biol). However, given how young those children were at the time, we could not evaluate their reading achievement. Here we report a longitudinal follow-up to that project, in which we followed the same children for an additional 4 years and evaluated their reading performance at school age. We show that FFRs to consonants in noise measured in preschoolers (ages 3-4) predict their reading skills 4 years later (ages 7-8), including silent and oral reading fluency, phonological processing, and rapid naming. These results show that individual differences in reading achievement in school-aged children can be predicted from a pre-literacy index of auditory processing (the FFR). They support auditory processing models of reading development and specifically identify poor auditory processing in noise as a potential source of reading impairment. FFRs in preschoolers may thus provide a clinical tool to identify children at risk for future reading impairment. Supported by NIH (DC01510) and the Knowles Hearing Center.
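
The longitudinal claim, that preschool FFR measures predict school-age reading, is at heart a cross-validated regression. A toy sketch follows; the feature names and weights are hypothetical and do not reproduce the study's measures.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_children = 60
# Hypothetical preschool FFR features at ages 3-4 (e.g. consonant-in-
# noise timing precision, harmonic encoding, response consistency).
ffr = rng.normal(size=(n_children, 3))
# Reading fluency measured 4 years later, partly predicted by the FFR.
reading = ffr @ np.array([0.6, 0.4, 0.3]) + rng.normal(0, 1, n_children)

# Cross-validated R^2 for predicting school-age reading from the
# preschool FFR captures the logic of the longitudinal claim.
scores = cross_val_score(LinearRegression(), ffr, reading,
                         cv=5, scoring="r2")
print("mean cross-validated R^2:", scores.mean().round(2))
```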