Slide Sessions

Session A: Reading and Comprehension
Thursday, August 18, 5:10 – 6:30 pm, Logan Hall

Session B: Speech Perception and Prediction
Friday, August 19, 4:00 – 5:20 pm, Logan Hall

Session C: Language Disorders and Therapy
Saturday, August 20, 12:10 – 1:30 pm, Logan Hall

Slide Session A

Thursday, August 18, 5:10 – 6:30 pm, Logan Hall

Reading and Comprehension

Chair: Liina Pylkkänen
Speakers: Matthew Nelson, Caroline Beelen, Shruti Dave, Ina Bornkessel-Schlesewsky

Neurophysiological dynamics of phrase structure building during sentence reading

Matthew Nelson1,2, Imen El Karoui3,4,5,6, Kristof Giber7, Laurent Cohen3,4,5,6,8, Sydney Cash7, Josef Parvizi9, Lionel Naccache3,4,5,6,8, John Hale10, Christophe Pallier1,2,11, Stanislas Dehaene1,2,11,12; 1Institut National de la Santé et de la Recherche Médicale (INSERM) U992, 2NeuroSpin Research Center, 3Institut National de la Santé et de la Recherche Médicale (INSERM) U1127, 4Centre National de la Recherche Scientifique (CNRS) UMR7225, 5Université Paris 6, 6Institut du Cerveau et de la Moelle Épinière Research Center (ICM), 7Massachusetts General Hospital, 8AP-HP Groupe hospitalier Pitié-Salpêtrière, 9Stanford University, 10Cornell University, 11Université Paris 11, 12Collège de France

Although sentences unfold one word at a time, most linguistic theories agree that the proper description of language structures is not a linear sequence of words, but a tree structure of nested phrases. Yet this description remains a theoretical construct whose neurophysiological underpinnings have never been directly observed. Here we present intracranial neurophysiological evidence for the construction of such structures in the left-hemisphere language network. Epileptic patients implanted with electrodes volunteered to perform a meaning-matching task on sentence pairs presented word by word using Rapid Serial Visual Presentation. Sentences of 3 to 10 words were automatically generated with varied syntactic structures across sentences. Patients were asked to compare the meaning of each sentence to a second sentence presented after a 2-second delay. We analyzed the time-dependent broadband high-gamma power (70 to 150 Hz), which is considered a reliable marker of the overall activation rate of the local neuronal population near the recording site. For electrodes located in key parts of the left-hemisphere language network, particularly the left temporal pole (TP) and anterior superior temporal sulcus (aSTS), broadband high-gamma power gradually builds up with each successive word in the sentence. This built-up activity then collapses at moments when these words can be unified into a constituent phrase, and builds up again with the presentation of additional words in the next phrase of the sentence. We propose that this activity tracks the number of open nodes of the syntactic tree being built to represent the sentence. Simultaneously, when the constituent unification events occur, electrodes in the left inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) show a transient activation whose magnitude increases with the number of nodes being closed, and which may reflect constituent structure-building operations associated with the closing of the open nodes in the tree structure. We compare the data to precise computational linguistic models and find that the data most closely match the operations of bottom-up parsing models that can be implemented with push-down automata to parse incoming language into syntactic trees. We suggest that neural activity in the left-hemisphere language network reflects the sequence of steps leading to the formation of phrasal structures in this manner.
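
To make the open-node dynamics concrete, here is a minimal, hypothetical Python sketch (not the authors' code) of how an open-node count can be derived from bottom-up, shift-reduce parsing with a push-down automaton; the toy grammar and part-of-speech tags are assumptions for illustration only.

```python
# Illustrative sketch: a shift-reduce parser over a toy grammar, tracking
# the number of open nodes on the stack after each word -- the quantity the
# abstract relates to high-gamma build-up and collapse.

# Toy grammar rules, right-hand side -> left-hand side (assumed).
RULES = {
    ("Det", "N"): "NP",
    ("V", "NP"): "VP",
    ("NP", "VP"): "S",
}

def open_node_counts(tagged_words):
    """Shift each word, then greedily reduce; record stack depth per word."""
    stack, counts = [], []
    for word, tag in tagged_words:
        stack.append(tag)                      # shift: one more open node
        reduced = True
        while reduced:                         # reduce while any rule matches
            reduced = False
            for rhs, lhs in RULES.items():
                n = len(rhs)
                if tuple(stack[-n:]) == rhs:
                    del stack[-n:]             # close n open nodes...
                    stack.append(lhs)          # ...and open their parent
                    reduced = True
        counts.append(len(stack))
    return counts

# "The boy kissed the girl": depth rises within phrases, collapses at boundaries.
sentence = [("the", "Det"), ("boy", "N"), ("kissed", "V"),
            ("the", "Det"), ("girl", "N")]
print(open_node_counts(sentence))  # [1, 1, 2, 3, 1]
```

The printed per-word counts rise as words are shifted and collapse when a constituent closes, the profile the abstract relates to high-gamma build-up in TP and aSTS and to transient closure responses in IFG and pSTS.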

Pre-reading neuroanatomical anomalies related to developmental dyslexia

Caroline Beelen1, Jolijn Vanderauwera1,2, Maaike Vandermosten1,2, Jan Wouters2, Pol Ghesquière1; 1Parenting and Special Education, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium, 2Research Group ExpORL, Department of Neurosciences, KU Leuven, Belgium

Developmental dyslexia is a learning disability characterized by persistent reading and/or spelling impairments despite adequate intellectual and sensory abilities. This learning disability is considered a multifactorial deficit, caused by a complex interplay of genes, environment and cognitive abilities, expressed at the neural level. In the present study we investigated whether anatomical differences in specific reading-related regions are already present prior to the onset of reading and spelling instruction in children who later develop dyslexia. For this purpose, we investigated cortical thickness and surface area in specific brain regions derived from the meta-analyses of Richlan et al. (Human Brain Mapping, 2009; Neuroimage, 2011), i.e. the fusiform, the left inferior, middle and superior temporal cortex, the left inferior parietal lobule (IPL) and the pars opercularis of the left inferior frontal gyrus (IFG). Since past research suggested that individuals with dyslexia rely more strongly on both hemispheres during reading-related tasks, we focused on their right-hemispheric counterparts as well. The sample consisted of 55 pre-readers (mean age 6 years, 2 months), of whom 31 children had a familial risk (FRD+) for developing dyslexia, defined as having a first-degree relative with dyslexia, and 24 children had no familial risk (FRD−). All children underwent an anatomical scan (T1) in a 3T scanner at the end of kindergarten. The participants were retrospectively classified as typical readers (TR, n=41) and dyslexic readers (DR, n=14) based on reading and spelling tests in grade 2 and grade 3. Children scoring below the 10th percentile on a reading test at two time points, or below the 16th percentile on a reading test at two time points and below the 10th percentile on a spelling task at two time points, were considered dyslexic. In a first step, we compared dyslexic readers (n=14) with FRD− typical readers (n=24). Next, we investigated whether differences in surface area and thickness were driven by dyslexia per se or by a familial risk for dyslexia. For this purpose, we compared all children developing dyslexia with all children developing typical reading skills, regardless of their familial risk (DR vs. TR), and we compared all children with and without a familial risk (FRD+ vs. FRD−). All structural T1 analyses were run using FreeSurfer software, implementing the Desikan-Killiany atlas. The results revealed that the surface area of children who later developed dyslexia differed from that of FRD− typical readers in the left fusiform, right pars opercularis and bilateral inferior temporal regions. The DR children had a smaller surface area in these regions, which seemed to be driven by dyslexia per se in the left fusiform region and by both dyslexia and familial risk in the right inferior temporal region. Differences in cortical thickness were absent. To conclude, already at pre-reading age, surface area anomalies in left- and right-hemispheric reading-related regions are present in children who later develop dyslexia.
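
The group comparison described above could be outlined as follows; this is an illustrative sketch only, assuming a per-child region table such as one exported by FreeSurfer's aparcstats2table (the file name and column names are hypothetical).

```python
# Illustrative outline (not the authors' pipeline): compare surface area
# between retrospectively defined groups, assuming a per-child table like
# one exported by FreeSurfer's aparcstats2table (--meas area).
import pandas as pd
from scipy import stats

df = pd.read_csv("aparc_area_bilateral.csv")   # hypothetical: one row per child

rois = ["lh_fusiform_area", "rh_parsopercularis_area",
        "lh_inferiortemporal_area", "rh_inferiortemporal_area"]

dr = df[df.group == "DR"]                                 # later-dyslexic readers
tr_frdminus = df[(df.group == "TR") & (df.famrisk == "FRD-")]

for roi in rois:
    # Welch's t-test per region of interest (no lesion/size covariates here).
    t, p = stats.ttest_ind(dr[roi], tr_frdminus[roi], equal_var=False)
    print(f"{roi}: t = {t:.2f}, p = {p:.3f}")
```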

Age-Related Increase in Neural Noise Influences Predictive Processing during Reading

Shruti Dave1,2, Trevor Brothers1,2, Tamara Y Swaab1,2; 1Center for Mind and Brain, Davis, CA, 2University of California, Davis

The neural noise hypothesis (Crossman & Szafran, 1956; Cremer & Zeef, 1987) posits that age-related reductions in effective signal-to-noise ratio in the brain can explain cognitive slowing in normal aging. Recent studies (Voytek & Knight, 2015; Voytek et al., 2015) have attributed age-related increases in neural noise to desynchronized neuronal population spiking, such that age differences have been found in the shape of EEG power spectra as measured from the scalp. The goal of the present study was to determine whether EEG spectral differences provide a reliable individual-difference measure across tasks, and whether age-related increases in neural noise also play a functional role in predictive language processing. In two experiments, EEG/ERP was recorded from two groups of younger adults (Experiment 1: n=24, mean age=19.5, range=18-28; Experiment 2: n=24, mean age=20.5, range=18-33) and one group of elderly participants (n=23, mean age=72.0, range=64-79). In the first experiment, participants read single high-cloze and low-cloze sentences for comprehension. In the second experiment, participants read moderately constraining two-sentence passages, and were asked to actively predict the final word of each. Measures of neural noise were assessed in both experiments by calculating EEG spectra for each reader using a fast Fourier transform. The slope of the logged power spectral density (1/f neural noise) was calculated across a logged frequency range of 2-40Hz, excluding visual cortical alpha power (7.5-12.5Hz) (similar to procedures established in Voytek et al., 2015). Compared to younger adults, elderly readers showed reductions in low-frequency power and increases in the high-frequency range. 1/f slopes differed across groups for each task (t=4.77, p<.001 for the comprehension task; t=3.53, p<.001 for the prediction task), and neural noise showed a strong correlation with age (R=.60, p<.001). These findings replicate the results of Voytek et al. (2015) in showing increased neural noise for older in comparison to younger adults. In ERP analysis of the prediction task, significantly reduced N400 amplitudes were observed for correctly predicted passage-final words (Brothers et al., 2015; Dave et al., 2015), but these N400 effects were typically delayed and reduced in amplitude for elderly readers. Across young and older participants, neural noise was a significant predictor of the amplitude (R=.35, p=.02) and latency (R=.42, p=.004) of the N400 prediction effect, such that higher noise was associated with slower, smaller N400 benefits from prediction accuracy. Therefore, we found that age-related differences in neural noise impact predictive processing during language comprehension, contributing to reduced and delayed N400 prediction effects in older readers. When assessing how consistent neural noise is in adults, irrespective of task, we found that the 1/f noise slope was highly reliable for elderly readers across both experiments (R=.82, p<.001), suggesting that neural noise is a consistent individual performance measure. 1/f neural noise may, as a result, provide a powerful tool to investigate cognitive declines in healthy aging adults.
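
A minimal sketch of the 1/f slope measure follows, assuming continuous EEG in a NumPy array; the study's exact preprocessing (epoching, electrode selection) may differ.

```python
# Minimal sketch of the 1/f neural-noise slope: power spectral density via
# Welch's method, then a linear fit of log-power on log-frequency over
# 2-40 Hz, excluding the 7.5-12.5 Hz alpha band (cf. Voytek et al., 2015).
import numpy as np
from scipy.signal import welch

def one_over_f_slope(eeg, fs):
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    keep = (freqs >= 2) & (freqs <= 40) & ~((freqs >= 7.5) & (freqs <= 12.5))
    slope, intercept = np.polyfit(np.log10(freqs[keep]),
                                  np.log10(psd[keep]), deg=1)
    return slope

# Demonstration on synthetic data (a random walk has a steep ~1/f^2 spectrum).
rng = np.random.default_rng(0)
pink_like = np.cumsum(rng.standard_normal(60 * 250))
print(one_over_f_slope(pink_like, fs=250))
```

A flatter (less negative) slope indicates relatively more high-frequency power, the profile the abstract interprets as greater neural noise.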

Social categorisation affects sentence comprehension: the role of the angular gyrus as a multimodal hub for contextualised event interpretation

Ina Bornkessel-Schlesewsky1, Sabine Frenzel2, Arne Nagels2, Alexander Droege2, Jens Sommer2, Richard Wiese2, Tilo Kircher2, Matthias Schlesewsky1; 1University of South Australia, 2University of Marburg

Social categorisation, in which fellow humans are classified into a socially relevant in-group (IG) or a socially disregarded out-group (OG), is an integral part of human social interaction (e.g. Tajfel, 1978). It not only structures our world in terms of social relations, but is also a driver for discriminatory behaviour (i.e. IG favouritism or OG discrimination). IG bias is thought to result from a strong association between the IG and the self (e.g. Tajfel & Turner, 1979). In many languages of the world, the IG bias is reflected in contrasts (e.g. in morphological marking) between 1st (self) and 3rd person referents. During language comprehension, the self appears to serve as a model for the actor of a sentence (i.e. the person or thing primarily responsible for the state of affairs described), possibly via embodied simulation (e.g. Bornkessel-Schlesewsky & Schlesewsky, 2013). Here, we tested the hypothesis that social categorisation modulates the neural correlates of language comprehension, and specifically of causal attributions during sentence comprehension. In an auditory fMRI study, we manipulated the IG or OG status of the speaker as well as competition for the actor role. Sentences were uttered from a 1st person perspective (i.e. the speaker was the actor) and the undergoer argument was either a second person pronoun (very high actor potential), a noun phrase (NP) referring to a human (high actor potential) or an NP referring to an inanimate NP (low actor potential). (Example, translated from German: I praise you/the scientist/the essay; high/medium/low actor competition) Thirty-two native speakers of German (15 female; mean age: 24.7) listened to sentences of this type. Prior to the fMRI scanning, participants filled in a fictitious personality questionnaire and received random feedback about their problem solving strategy (classifying them as either a sequential or a conclusive problem solver). Participants were told that the speakers uttering the sentences were either sequential or conclusive problem solvers, thus classifying them as either IG or OG members according to a minimal group paradigm (e.g. Tajfel et al., 1971). In a small proportion of trials, participants judged either the most likely cause of the state of affairs just described (the speaker or another person/situation) or the problem-solver strategy of the speaker they had just heard. We observed an interaction between speaker group and actor competition in the right angular gyrus (AG), with increased competition leading to an activation increase for an OG speaker and to an activation decrease for an IG speaker. This result cannot be explained via length differences between the 2nd person pronoun and the two full NPs, as the activation pattern reversed according to the social status of the speaker. Our findings support the perspective that the AG serves as a "cross-modal integrative hub" for event interpretation within a broader context and in support of intended action (Seghier, 2013). The novel contribution of the present study lies in the demonstration that this contextualised, cross-modal interpretation appears to be influenced by the hearer's social categorisation of the speaker.

Slide Session B

Friday, August 19, 4:00 – 5:20 pm, Logan Hall

Speech Perception and Prediction

Chair: Ina Bornkessel-Schlesewsky
Speakers: Jona Sassenhagen, Dave F. Kleinschmidt, Helen Blank, Matthias J. Sjerps

Multilevel modeling of naturalistic language processing: An application to cross-level predictive processes

Jona Sassenhagen1, Christian J. Fiebach1; 1University of Frankfurt

For speech, a multilevel message has to be encoded into a one-dimensional, interference-prone, narrow-band channel. As a consequence, listeners have to instantaneously reconstruct multilevel structures from impoverished input. Predictive Coding frameworks suggest this process is made possible by relying on contextual predictability, highlighting the importance of investigating prediction error signals. Prediction error responses have been observed in multiple experiments; here, we develop a framework for studying the interaction of surprisal and contextual support (1) in coherent natural (as opposed to artificially constructed) language, and (2) simultaneously across multiple linguistic levels (rather than by experimental isolation of specific levels of linguistic representation). Massive, time-resolved regression analysis of neural time series (Smith & Kutas, 2015) was combined with state-of-the-art tools from computational linguistics that quantitatively describe the speech signal at multiple levels, allowing for model-based, multilevel analysis of naturalistic language processing. For multiple experiments (EEG and MEG; n > 75), we algorithmically generated multilevel (i.e., combined phonetic, semantic, syntactic ...) descriptions of natural linguistic stimuli (homogeneous sentences, popular audio books). For example, semantic-level predictors included word frequency, vector estimations of word meanings, and information-theoretic measures (such as entropy); syntactic predictors included, e.g., distance to the syntactic head. To compute neural responses to each of these representational levels, we jointly analyzed full data sets (in contrast to an analysis of specific epochs). The time-resolved, multilevel predictor matrices and the full neural time series are treated as linear systems and solved for the least-squares coefficients, corresponding to the independent change of the neural response per unit of each predictor over time. These coefficient vectors also constitute encoder models that predict neural responses, allowing a statistical test of whether a feature significantly contributes to the prediction of the whole-brain signal. Using this approach, we observe statistically significant signatures of predictive coding across levels of linguistic representation. More specifically, on each representational level, primary effects are increasingly attenuated by increasing amounts of (top-down, predictive) constraint from the sentence context. For example, we observe that early word onset-locked temporal cortex activity (equivalent to the N100) is more attenuated later compared to earlier in a text, when listeners become attuned to the rhythm of speech. Also, higher-level surprisal effects, e.g., an N400-like response to unconditional word probability in coherent sentences, are attenuated for contextually predictable words. We also observed cross-level interactions, e.g., between semantic predictability effects and low-level acoustic effects, corresponding to an attenuation of low-level surprise when individual words are more predictable. Using cross-validated prediction of the EEG signal, we observe that a multilevel model predicts neural activity significantly (r > .11, p < .01), and better than simpler single-level models. In conclusion, our data demonstrate the feasibility of accounting for the richness of naturalistic language in electrophysiological investigations. Our method can be employed as a plug-and-play test for a wide range of linguistic theories, to the degree to which they are well specified. It highlights the interaction of simultaneous predictive processes across multiple levels of naturalistic language.
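
A simplified, single-channel sketch of the time-resolved regression follows; the shapes and the two predictors are assumptions for illustration, and published analyses (e.g., Smith & Kutas, 2015) typically add regularisation and model all channels.

```python
# Toy sketch: impulse predictors at word onsets are expanded into lagged
# copies, and the EEG is solved for least-squares coefficients -- one
# time-resolved coefficient waveform ("rERP"/encoder model) per predictor.
import numpy as np

def build_lagged_design(predictors, n_lags):
    """Expand (n_times, n_preds) impulse predictors into lagged copies."""
    n_times, n_preds = predictors.shape
    X = np.zeros((n_times, n_preds * n_lags))
    for lag in range(n_lags):
        # Write each predictor, delayed by `lag`, into its column block.
        X[lag:, lag::n_lags] = predictors[:n_times - lag, :]
    return X

rng = np.random.default_rng(1)
n_times, fs, n_lags = 1000, 100, 60             # 10 s at 100 Hz, 600 ms kernels
preds = np.zeros((n_times, 2))
onsets = rng.choice(n_times - n_lags, size=20, replace=False)
preds[onsets, 0] = 1.0                          # word onset (constant response)
preds[onsets, 1] = rng.standard_normal(20)      # e.g., per-word surprisal

X = build_lagged_design(preds, n_lags)
eeg = X @ rng.standard_normal(X.shape[1]) + rng.standard_normal(n_times)

beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
kernels = beta.reshape(2, n_lags)               # one kernel per predictor
print(kernels.shape)                            # (2, 60)
```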

Neural mechanisms for coping with talker variability by rapid distributional learning

Dave F. Kleinschmidt1, T. Florian Jaeger1, Rajeev Raizada1; 1University of Rochester

Talker variability presents a substantial problem for speech perception: successful and efficient comprehension of each individual talker's speech requires a different mapping between acoustic cues and phonetic categories. Listeners rapidly recalibrate to an unfamiliar talker's accent, adjusting their category boundaries based on talker-specific distributions of acoustic cues (e.g., Norris, McQueen, and Cutler 2003; Kleinschmidt and Jaeger 2015). The neural mechanisms of this rapid learning, however, are not clearly understood. Previous work has focused on the mechanisms by which audiovisual or lexical labels are integrated with ambiguous cues to drive learning, but such labels are not in general available to listeners. We take a different approach, probing the mechanisms by which purely distributional information changes phonetic classification of a /b/-/p/ continuum (cued by voice-onset time, VOT). In our experiment, 20 listeners (plus 2 excluded for not categorizing reliably) performed a distributional learning task while being scanned with fMRI. On each of 222 trials, listeners classified b/p minimal-pair words, indicating by button press whether they heard, e.g., "beach" or "peach". Each word's VOT was randomly drawn from a bimodal distribution, which implied a particular category boundary. Listeners heard either a low- or high-VOT "accent", which differed only in terms of the distribution of VOTs. Critically, for both accents, 0ms VOT is a /b/, and 40ms is a /p/. But for the low-VOT accent, 20ms clusters with the higher-VOT /p/ distribution, and for the high-VOT accent, 20ms clusters with the /b/s. All three of these stimuli occur equally frequently within and across accents. Behaviorally, listeners did learn the different accents, classifying the 20ms VOT stimulus more often as /b/ in the high-VOT accent condition than in the low. Brain images were collected with fMRI using clustered volume acquisition (2s TA, 3s TR) with a 3, 6, and 9s jittered TOA. We performed a searchlight similarity analysis of these images to determine where talker-specific VOT-category mappings could be decoded. We extracted activity patterns for each unique VOT via a GLM. Within 3-voxel (6mm) radius searchlights, we calculated the similarity of the 20ms VOT stimuli to 0ms and 40ms based on the correlation of the searchlight pattern pairs (across runs, to mitigate collinearity). We tested whether the (listener-specific) within-category pair was more similar than the between-category pair for each searchlight, and randomly permuted accent condition labels to get the null distribution of largest cluster extent (t>4) and maximum t value. The largest cluster was in the left inferior parietal lobule (IPL; peak: -40, -52, 20), with 53 voxels (p < 0.05), and contained the largest t = 8.7 (p < 0.001). Previous work has shown the IPL plays a role in adjusting phonetic categories based on audiovisual information (Kilian-Hütten, Vroomen, and Formisano 2011) and lexical labels (Myers and Mesite 2014). We find that the IPL represents rapidly learned talker-specific phonetic category structure, even when that information must be extracted from the distribution of acoustic cues alone, suggesting a more general role in flexibly recognizing speech in the face of talker variability via distributional learning.
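
The core per-searchlight statistic can be sketched as follows (a schematic with hypothetical pattern vectors, not the authors' pipeline): correlate the ambiguous 20ms pattern with the 0ms and 40ms endpoint patterns from a different run, and ask whether the accent-specific within-category pair is more similar than the between-category pair.

```python
# Schematic of the searchlight similarity test; inputs are hypothetical
# GLM pattern vectors (one value per voxel within a 6 mm searchlight).
import numpy as np

def correlation(a, b):
    return np.corrcoef(a, b)[0, 1]

def within_minus_between(p20, p0, p40, accent):
    """Similarity of the ambiguous 20 ms pattern to its accent-specific
    within-category endpoint, minus its between-category similarity.
    The 20 ms pattern comes from a different run than the endpoints,
    to mitigate within-run collinearity."""
    sim_b, sim_p = correlation(p20, p0), correlation(p20, p40)
    return sim_b - sim_p if accent == "high" else sim_p - sim_b

rng = np.random.default_rng(6)
p0, p40 = rng.standard_normal(123), rng.standard_normal(123)
p20 = p0 + 0.5 * rng.standard_normal(123)        # here 20 ms resembles /b/
print(within_minus_between(p20, p0, p40, accent="high"))  # positive

# The null distribution of cluster extent / max t is then obtained by
# re-running the group test with accent labels randomly permuted.
```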

Predictive coding but not neural sharpening simulates multivoxel fMRI response patterns during speech perception

Helen Blank1, Matthew H Davis1; 1MRC Cognition and Brain Sciences Unit, Cambridge, UK

Successful speech perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. Two different functional mechanisms have been proposed for how expectations influence processing of speech signals. Traditional models, such as TRACE, suggest that expected features of the speech input are sharpened via interactive activation. Conversely, Predictive Coding suggests that expected features are suppressed, such that unexpected features of the speech input are processed further. The present work aimed to distinguish between these two accounts of how prior knowledge influences speech perception. We analysed sparse imaging fMRI data from 21 healthy participants. To investigate the effect of prior knowledge on speech perception, participants read neutral (“XXX”) or matching written words before hearing degraded spoken words (noise-vocoded at 4 and 12 channels). In catch trials, participants said aloud the previous written or spoken word. By combining behavioural, univariate and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception, we tested computational models that implemented Predictive Coding and Sharpening mechanisms. To do this, we focused on the representation of speech in the left posterior superior temporal sulcus (pSTS), since previous studies showed that prior knowledge influences the magnitude of activity in this brain region during speech perception. Behavioural results showed that both increased sensory detail and informative expectations improve the accuracy of word report for degraded speech. Univariate fMRI analysis revealed a main effect of matching vs. neutral prior knowledge on BOLD response magnitude in the left pSTS (p<0.05 FWE voxel correction). Mean beta values extracted from this region showed a reduction in the match relative to the neutral condition. Our computational simulations of Sharpening and Predictive Coding during speech perception could both explain these behavioural and univariate fMRI observations. However, multivariate fMRI analyses revealed that sensory detail and prior expectations have interacting effects on speech representations: increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns only when prior knowledge was absent, but with informative expectations the amount of measured information during presentation of clearer speech was reduced. This interaction revealed by multivariate fMRI observations was uniquely modelled by Predictive Coding and not by Sharpening simulations. In summary, the present results show that both increased sensory detail and matching prior expectations improved accuracy of word report for degraded speech but, crucially, had interacting effects on speech representations in the pSTS. The combination of multivariate fMRI analysis with computational modelling was methodologically critical to discriminate Predictive Coding from Sharpening mechanisms and provides a unique contribution to understanding the observed interaction of sensory detail and matching prior expectations. Our findings support the view that the pSTS does not represent the expected, and therefore redundant, part of the sensory input during speech perception, in line with Predictive Coding theories.
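
The contrast between the two mechanisms can be illustrated with a toy model (our sketch with assumed parameters, not the authors' implementation): Sharpening multiplies the sensory input by a gain on expected features, Predictive Coding transmits the residual after subtracting the prediction, and measurement noise is added before "decoding".

```python
# Toy contrast of Sharpening vs. Predictive Coding responses to degraded
# speech, under a neutral or matching prior. All parameters are assumed.
import numpy as np

rng = np.random.default_rng(2)
true_word = rng.standard_normal(500)             # idealised pattern for one word

def information(resp):
    """Decodable word information: |correlation| with the true pattern."""
    return abs(np.corrcoef(resp, true_word)[0, 1])

def simulate(detail, prior):
    sensory = detail * true_word + (1 - detail) * rng.standard_normal(500)
    sharpened = sensory * (1 + np.abs(prior))    # expected features boosted
    pred_error = sensory - prior                 # expected features removed
    noise = 0.3 * rng.standard_normal(500)       # measurement (scanner) noise
    return information(sharpened + noise), information(pred_error + noise)

for detail in (0.3, 0.9):                        # low vs. high sensory detail
    for label, prior in [("neutral", np.zeros(500)), ("match", true_word)]:
        sharp_info, pc_info = simulate(detail, prior)
        print(f"detail={detail}, prior={label}: "
              f"sharpening={sharp_info:.2f}, pred. coding={pc_info:.2f}")
```

Only the prediction-error response reproduces the reported interaction: with a matching prior, measured information decreases as sensory detail increases, whereas with a neutral prior it increases; the sharpened response stays informative in both prior conditions.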

Hierarchical, acoustically-grounded, distinctive features are the dominant representations of perceived speech

Matthias J. Sjerps1,2, Matthew K. Leonard3, Liberty S. Hamilton3, Keith Johnson2, Edward F. Chang3; 1Radboud University, Nijmegen, the Netherlands, 2University of California, Berkeley, United States, 3University of California, San Francisco, United States

INTRODUCTION: Previous large-scale behavioral studies of speech sounds presented in noise have shown that some speech sounds are more likely to be confused than others, and these confusions are related to distinctive features in those speech sounds: Consonant-Vowel syllables (CVs) that differ only in place of articulation (e.g., /pa/ vs. /ka/) are more likely to be confused with each other than sounds that differ only in frication (e.g., /ta/ vs. /za/), which are in turn more likely to be confused than those that differ only in voicing (e.g., /sa/ vs. /za/). This result suggests that these features are, perceptually, represented in a hierarchical relation (where Voicing is more robust to distortion than Frication, which is in turn more robust than Place). In addition, recent work from our lab has indicated that acoustically-grounded distinctive features are the dominant form of representation in secondary auditory cortex during (passive) listening under clear conditions. Others, however, have suggested that gesturally-grounded distinctive features encoded in production-related cortical regions are involved in speech perception as well, especially under challenging listening conditions. Here we addressed two questions: (1) Is the feature hierarchy that has previously been reported in perception reflected in the neural encoding of speech? And (2) does activity observed in motor regions, especially when speech is presented in noise, provide additional discriminative information? METHODS: Three epilepsy patients, implanted with subdural high-density electrocorticography (ECoG) grids over the left hemisphere for clinical purposes, participated in the study. Coverage included the superior temporal gyrus (STG), middle temporal gyrus (MTG), inferior frontal gyrus (IFG), and sensorimotor cortex (SMC). While neural activity was recorded from these areas, participants listened to 12 CV sounds (e.g., /pa/, /ta/, /sa/, etc.) and indicated what sound they perceived. Stimuli were presented in clear conditions (without noise), or embedded in white noise with a signal-to-noise ratio of +6dB or 0dB. RESULTS & DISCUSSION: We examined distances in the neural encoding of these CVs with Multi-Dimensional Scaling to investigate whether the perceptual hierarchy mentioned above was also reflected in auditory-responsive cortical regions. We found that CVs with the same distinctive features show similar spatiotemporal patterns of activity across speech-responsive regions in peri-Sylvian cortex, and that these representations indeed adhere to the same hierarchy (Voicing>Frication>Place). We also compared the discriminability of different CVs in cortical regions typically associated with speech perception (STG & MTG) with that in regions typically associated with speech production (IFG & SMC). We observed that whereas perception regions discriminated between CVs, production regions did not allow for meaningful discrimination, regardless of the SNR level. This observation argues against a direct role for motor regions in discriminating between speech sounds, whether in clear or more challenging listening conditions. Together, the results show that hierarchically organized distinctive features, grounded in acoustic similarities, are the dominant representation of speech sounds.
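
The multidimensional scaling step can be sketched as follows; the feature coding and weights below are assumptions chosen only to mimic the reported hierarchy, with a weighted feature mismatch standing in for the 1 - correlation distances that would be computed from ECoG patterns.

```python
# Illustrative MDS embedding of CV dissimilarities. Feature coding
# (voicing, frication, place) and weights are assumed; the weights impose
# the perceptual hierarchy Voicing > Frication > Place.
import numpy as np
from sklearn.manifold import MDS

cvs = {"pa": (0, 0, 0), "ta": (0, 0, 1), "ka": (0, 0, 2),
       "ba": (1, 0, 0), "da": (1, 0, 1), "ga": (1, 0, 2),
       "fa": (0, 1, 0), "va": (1, 1, 0), "sa": (0, 1, 1), "za": (1, 1, 1)}
weights = np.array([3.0, 2.0, 1.0])

names = list(cvs)
feats = np.array([cvs[n] for n in names], dtype=float)
# Weighted feature mismatch, a stand-in for 1 - pattern correlation.
dissim = np.abs(feats[:, None, :] - feats[None, :, :]) @ weights

xy = MDS(n_components=2, dissimilarity="precomputed",
         random_state=0).fit_transform(dissim)
for name, (x, y) in zip(names, xy):
    print(f"{name}: ({x:+.2f}, {y:+.2f})")   # voicing dominates the layout
```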

Slide Session C

Saturday, August 20, 12:10 – 1:30 pm, Logan Hall

Language Disorders and Therapy

Chair: David Corina
Speakers: Idan Blank, Diego L. Lorca-Puls, Magdalena Sliwinska, Kyrana Tsapkini

Functional reorganization of the brain networks that support language processing following brain damage in aphasia

Idan Blank1, Sofia Vallila-Rohter2, Swathi Kiran2, Evelina Fedorenko3; 1MIT, 2Boston University, 3Massachusetts General Hospital

Studies of functional brain reorganization in people with aphasia (PWAs) focus on fronto-temporal language regions (Binder et al., 1997) because they respond selectively to language in healthy adults (Fedorenko et al., 2011), and damage to them compromises linguistic processing (Geschwind, 1970; Bates et al., 2003). The prevailing hypothesis is that language processing in PWAs relies on spared, ipsi-lesional left-hemispheric language regions (Cao et al., 1999; Fridriksson, 2010) or contra-lesional, right-hemispheric homologues (Dressel et al., 2010; Vitali et al., 2007). However, in healthy adults, language processing additionally recruits fronto-parietal multiple-demand (MD) regions (Duncan, 2010). MD regions constitute a distinct functional network: they are domain-general, responding across diverse demanding tasks (Fedorenko et al., 2013), and their activity is not synchronized with activity in language regions during naturalistic cognition (Blank et al., 2014). Importantly, however, language processing, especially when effortful, does engage the MD network (e.g., reading nonwords, ambiguous or syntactically complex constructions) (Fedorenko, 2014). We therefore hypothesize that MD regions increase their involvement in linguistic processing in PWAs, “coming to the language network’s rescue”. Indeed, some neuroimaging studies of PWAs report activations outside the language network during language processing (e.g., Brownsett et al., 2014; Geranmayeh et al., 2014; Meier et al., 2016). However, interpreting these effects as MD activations is challenging because such studies rely on group-based analyses that are not sensitive to inter-individual differences in the locations of MD and language regions, which lie side-by-side in the frontal cortex (Fedorenko et al., 2012). Consequently, here we defined language and MD regions functionally, in each individual PWA, and characterized them. Eight PWAs (50-72yo; aphasia quotient: 34-97%) were scanned in fMRI at least 6 months following a left-hemispheric stroke. Sixty young participants (18-30yo) served as controls. Six bilateral language regions of interest (ROIs) were defined in each participant (Nieto-Castañón & Fedorenko, 2012) via reading of sentences vs. nonwords (speed-adjusted for PWAs) (this reliable localizer generalizes across tasks and input modalities; Fedorenko et al., 2010). Nine bilateral MD ROIs were localized using a validated spatial working-memory task (more vs. fewer locations; difficulty-adjusted for PWAs) (Fedorenko et al., 2013). All ROIs were tested for the Sentences>Nonwords effect size (in independent data) and for inter-regional correlations of resting-state activity time-courses. Left language ROIs were similar across PWAs and controls in their Sentences>Nonwords effect size, its spatial extent, and their synchronized resting-state activity. Right language ROIs trended towards a stronger Sentences>Nonwords effect in PWAs than in controls, suggesting increased recruitment during language processing. However, their resting-state synchronization patterns remained unaltered. Strikingly, in PWAs, several MD ROIs showed a “language-like” Sentences>Nonwords effect, opposite to the expected Nonwords>Sentences effect. Moreover, in PWAs, left inferior frontal gyrus (opercular) and precentral gyrus MD regions showed greater resting-state synchronization with the language network than with the MD network, violating the language-MD dissociation. Controls did not show these effects even when we limited their ROIs to fall outside the lesion sites of PWAs. Additionally, results did not generalize to non-MD frontal voxels surrounding language ROIs. Therefore, some MD regions qualitatively alter their role within the networks engaged in language processing in PWAs, possibly supporting language recovery.
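
The logic of the individual-subject functional ROI analysis can be sketched as follows (array names and the top-10% threshold are assumptions for illustration): define each fROI from one half of the localizer data, then estimate its effect size in the independent half.

```python
# Schematic of subject-specific functional ROI definition and independent
# effect-size estimation (data shapes and threshold are hypothetical).
import numpy as np

def roi_effect_size(contrast_run1, contrast_run2, parcel_mask, top_frac=0.1):
    """contrast_run*: voxelwise Sentences-minus-Nonwords estimates."""
    in_parcel = np.flatnonzero(parcel_mask)
    n_top = max(1, int(top_frac * in_parcel.size))
    # Define the fROI on run 1 only...
    roi = in_parcel[np.argsort(contrast_run1[in_parcel])[-n_top:]]
    # ...and estimate its effect on the independent run 2 data.
    return contrast_run2[roi].mean()

rng = np.random.default_rng(4)
parcel = np.zeros(5000, dtype=bool)
parcel[:800] = True                    # toy anatomical parcel of 800 voxels
c1, c2 = rng.standard_normal(5000), rng.standard_normal(5000)
print(roi_effect_size(c1, c2, parcel))
```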

A new TMS-guided lesion-deficit mapping approach identifies brain areas where stroke damage impairs phonological processing

Diego L. Lorca-Puls1, Andrea Gajardo-Vidal1,2, Mohamed L. Seghier1,3, Alexander P. Leff1, Varun V. Sethi1, Susan Prejawa1,4, Thomas M. H. Hope1, Joseph T. Devlin1, Cathy J. Price1; 1University College London, 2Universidad del Desarrollo, 3Emirates College for Advanced Education, 4Max Planck Institute for Human Cognitive and Brain Sciences

INTRODUCTION: Transcranial magnetic stimulation (TMS) focused on either the left anterior supramarginal gyrus (SMG) or left pars opercularis (pOp) significantly increases response latencies during phonological relative to semantic tasks (Gough, Nobre, & Devlin, 2005; Sliwinska, James, & Devlin, 2015). Here we sought to establish whether the effects of these transient “virtual lesions” in healthy participants predict long-term outcome of phonological abilities in patients with permanent damage to either SMG or pOp. METHODS: Our participants were 154 right-handed, English-speaking adults, 1-5 years after a left-hemisphere stroke, selected from the PLORAS database, whose brain lesions were identified from high-resolution T1-weighted MRI scans using an automated procedure in SPM8. To establish how consistently our TMS-guided search for lesion sites identified patients with phonological difficulties, we first created two spherical regions of interest (ROIs) centred on the mean MNI coordinates reported in the TMS studies: xyz = [-52 -34 30] for SMG and [-52 16 8] for pOp, each with a radius of 5 mm (i.e. 0.5 cm3 in volume). Second, we grouped all the patients according to whether or not their lesions loaded heavily on either, both or neither (i.e. control group) of the TMS ROIs. Finally, we compared the incidence and severity of difficulties on phonologically and semantically demanding tasks (from the Comprehensive Aphasia Test) across and within groups. RESULTS: The incidence and severity of phonological difficulties were significantly worse for patients in the SMG or pOp groups compared to the control group, but when lesion size was carefully matched across groups, only the difference between the SMG and control groups remained significant. Further investigation into the SMG and pOp lesions that were associated with phonological difficulties showed that these were far larger (>20 times) than the initial regions of interest from TMS. For 11/12 and 12/13 patients with phonological difficulties in the SMG and pOp groups, respectively, damage extended deep into the underlying white matter (WM). To quantify the importance of these extended regions (i.e. SMG+WM and pOp+WM), we reclassified all 154 patients according to whether or not their lesions loaded heavily on: SMG+WM, pOp+WM, both or neither. The incidence and severity of phonological difficulties in the SMG+WM or pOp+WM groups were significantly worse than in the control group, even after accounting for the effect of lesion size. Moreover, the SMG+WM and pOp+WM lesions had a larger impact on phonologically than semantically demanding tasks; and the difference between phonological and semantic performance was significantly greater for both the SMG+WM and pOp+WM groups than the control group. CONCLUSIONS: We have shown how the effect of TMS-induced transient lesions in healthy participants guided the identification of lesion sites associated with persistent phonological difficulties 1-5 years post-stroke. Furthermore, the extent of the identified regions raises the possibility that the effect of TMS may emerge from disruption to much larger areas than the recorded site of stimulation. Our novel TMS-guided analysis could be used to guide the search for lesion sites that predict language outcome after stroke.
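
A hedged sketch of the ROI construction and a simple lesion-load computation follows, assuming lesion maps already normalised to MNI space (the file name is hypothetical); nibabel's affine utilities map voxel indices to millimetre coordinates.

```python
# Sketch: 5 mm radius spheres around the TMS coordinates in MNI space, then
# the proportion of each sphere overlapping a patient's binary lesion map.
import numpy as np
import nibabel as nib

def sphere_mask(img, centre_mm, radius_mm=5.0):
    """Boolean voxel mask of a sphere defined in world (mm) coordinates."""
    ijk = np.stack(np.meshgrid(*[np.arange(s) for s in img.shape],
                               indexing="ij"), axis=-1)
    xyz = nib.affines.apply_affine(img.affine, ijk)
    return np.linalg.norm(xyz - centre_mm, axis=-1) <= radius_mm

lesion_img = nib.load("patient_lesion_mni.nii.gz")   # hypothetical file
lesion = lesion_img.get_fdata() > 0.5

for name, centre in [("SMG", (-52, -34, 30)), ("pOp", (-52, 16, 8))]:
    roi = sphere_mask(lesion_img, np.array(centre))
    load = (lesion & roi).sum() / roi.sum()
    print(f"{name} lesion load: {load:.2f}")
```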

Defining the importance of domain-general brain systems in language

Magdalena Sliwinska1, Ines Violante1, Adam Hampshire1, Robert Leech1, Joseph Devlin2, Richard Wise1; 1Imperial College London, 2University College London

There is now good evidence that domain-general brain systems play a central role in various language processes, and potentially in recovery from post-stroke aphasia. Part of the so-called salience network, consisting of adjacent areas in the midline superior frontal gyrus (SFG) and dorsal anterior cingulate cortex (dACC), forms a core component of these distributed systems. This study was designed to collect proof-of-principle data in healthy individuals, prior to a study on post-stroke aphasia, to assess whether our novel vocabulary-learning task activates the salience network and whether cortical stimulation applied to this midline frontal region can influence vocabulary learning. In the first part of the study, twenty volunteers participated in an fMRI study where they were asked to learn associations between well-known objects and pseudowords. Without prior training, participants were asked to judge whether a novel heard pseudoword was the ‘correct’ name for an object they saw on a screen. After each trial, participants received feedback on their performance, which trained them on the arbitrary object-word associations over repeated trials. Each participant completed four blocks of this task, and in each block they learnt five different associations. Each block was organized into four mini-blocks to track the learning process. Participants also performed an easy baseline task in which they were asked to determine whether a picture of a common object was correctly paired with a heard real word. The baseline task also involved a ‘yes-no’ decision on object-word associations, but in the absence of new vocabulary learning. Successful learning was clearly reflected in the behavioural data. There was a significant linear improvement in accuracy and response times across each block of the learning task, and performance in the final stage of this task did not differ from performance in the control task. At the same time, neuroimaging data demonstrated significantly increased activation in the midline SFG/dACC when the learning task was contrasted with the baseline task. In addition, activation in this region decreased across trials as learning progressed. In the second part of the study, fifteen volunteers participated in three separate TMS sessions. During each session they received off-line TMS for 10 min at a frequency of 1 Hz, with intensity set to 55% of the maximum stimulator output. In the first session, TMS was applied to the midline SFG and participants performed the novel vocabulary-learning task. In the second session, participants received the same stimulation but performed the baseline task. In the final session, participants performed the learning task, but after TMS had been applied to a more posterior midline control site. Accuracy was improved and responses were faster in the first two mini-blocks of the learning task when stimulation was applied to the midline SFG; these effects were specific to the learning task and to stimulation of the midline SFG. This study clearly demonstrates the importance of the salience network during task-dependent language processes, and it informs a future study in post-stroke aphasia to determine whether stimulation of the SFG improves vocabulary relearning after aphasic stroke.
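
The linear learning-trend analysis might look like the following sketch (the data layout and effect sizes are assumed for illustration): fit a slope of accuracy over mini-block position within each block, average within participants, and test the slopes against zero.

```python
# Illustrative linear-trend test on learning curves; the accuracy array is
# synthetic (participants x blocks x mini-blocks), not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
acc = np.clip(0.55 + 0.1 * np.arange(4)            # improvement across mini-blocks
              + 0.05 * rng.standard_normal((20, 4, 4)), 0, 1)

slopes = []
for subj in acc:
    # One slope per block (accuracy regressed on mini-block position 0..3),
    # averaged within participant.
    per_block = [np.polyfit(np.arange(4), block, 1)[0] for block in subj]
    slopes.append(np.mean(per_block))

t, p = stats.ttest_1samp(slopes, 0.0)   # is the mean learning slope nonzero?
print(f"t = {t:.2f}, p = {p:.4f}")
```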

Imaging tDCS intervention effects in primary progressive aphasia

Kyrana Tsapkini1, Andreia Faria1, Ashley Harris2, Yenny Webb-Vargas3, Tushar Chakravarty1, Bronte Ficek1, Brenda Rapp4, John Desmond1, Richard Edden1, Constantine Frangakis3, Martin Lindquist3, Argye Hillis1; 1Johns Hopkins Medicine, 2University of Calgary, 3Johns Hopkins School of Public Health, 4Johns Hopkins University

Primary progressive aphasia (PPA) is a clinical neurodegenerative syndrome that first and foremost affects language abilities. Recent evidence supports beneficial effects of tDCS after 10-15 sessions of intervention; however, the neural mechanism for these effects remains unclear. We used a multi-modality imaging approach (Faria et al. Neuroimage 2012; 61(3) 613-621) to identify changes in functional and structural connectivity (as measured by resting-state fMRI and diffusion tensor imaging, respectively) associated with tDCS in individuals with PPA. Additionally, we tested the hypothesis that one of the mechanisms through which tDCS works is by lowering GABA (an inhibitory neurotransmitter) in the stimulated area, as has previously been shown in healthy controls after a single application during motor learning (Stagg et al. Current Biology 2011; 21(6) 480-484). We report on volumetric (MPRAGE), resting-state fMRI (rsfMRI) and diffusion tensor imaging (DTI) data from 13 participants before, immediately after, and two months after intervention. The intervention was anodal tDCS over the left inferior frontal gyrus (IFG) or sham stimulation for 15 daily sessions, coupled with language therapy targeting written word production (spelling). We also report on GABA measurements in the left IFG (stimulated area) vs. right sensory-motor cortex (control area) in 19 participants before vs. after anodal tDCS vs. sham. The rsfMRI analysis revealed no differential change in connectivity from pre- to post-treatment for the tDCS condition compared to sham, as measured by the correlation between the stimulated area and the rest of the spelling ROIs within the left hemisphere. However, functional connectivity between the left and right IFG increased significantly more in the tDCS condition relative to sham. This connectivity between the right and left IFG correlated positively with the volume of the right IFG and with improvement in behavioral scores even 2 months post-treatment. Fractional anisotropy (FA) in the white matter beneath both right and left IFG did not change with either tDCS or sham. GABA significantly decreased in the left IFG but not in the right sensory-motor cortex, and only in the tDCS condition; it remained unchanged after sham. In this study we identified two possible brain mechanisms for the effects of tDCS in PPA. First, we showed the contribution of GABA to the long-term effects of tDCS: multiple consecutive applications of tDCS are associated with reductions in GABA in the stimulated area in PPA. We also demonstrated that a possible brain mechanism for the advantage of tDCS over sham may be the strengthening of functional connectivity between the stimulated area (left IFG) and the homologous right IFG. The predictive value of the volume of the right IFG for tDCS effectiveness has important implications for the prognosis and timing of intervention effects: intervention at earlier stages of disease progression, when there is less atrophy in the right hemisphere, may be more beneficial than later intervention.