Slide Sessions

Session A: Network Development and Reorganization
Thursday, October 15, 1:30 - 2:50 pm, Grand Ballroom

Session B: Perspectives on Language Processing
Friday, October 16, 3:30 - 4:50 pm, Grand Ballroom

Session C: Outside the Left Peri-Sylvian Cortex
Saturday, October 17, 8:00 - 9:20 am, Grand Ballroom

Slide Session A

Thursday, October 15, 1:30 - 2:50 pm, Grand Ballroom

Network Development and Reorganization

Chair: TBD
Speakers: Frank Eisner, Łukasz Bola, Fatemeh Geranmayeh, Dorian Pustina

The effect of literacy acquisition on cortical and subcortical networks: A longitudinal approach

Frank Eisner1, Uttam Kumar2, Ramesh K Mishra3, Viveka Nand Tripathi4, Anupam Guleria2, Prakash Singh4, Falk Huettig5; 1Radboud University, 2Sanjay Gandhi Postgraduate Institute of Medical Sciences Campus, 3University of Hyderabad, 4University of Allahabad, 5Max Planck Institute for Psycholinguistics

How do human cultural inventions such as reading result in neural re-organization? Previous cross-sectional studies have reported extensive effects of literacy on the neural systems for vision and language (Dehaene et al [2010, Science], Castro-Caldas et al [1998, Brain], Petersson et al [1998, NeuroImage], Carreiras et al [2009, Nature]). In this first longitudinal study with completely illiterate participants, we measured brain responses to speech, text, and other categories of visual stimuli with fMRI before and after a group of illiterate participants in India completed a literacy training program in which they learned to read and write Devanagari script. A literate and an illiterate no-training control group were matched to the training group in terms of socioeconomic background and were recruited from the same societal community in two villages of a rural area near Lucknow, India. This design permitted investigating effects of literacy cross-sectionally across groups before training (N=86) as well as longitudinally (training group N=25). The two analysis approaches yielded converging results: Literacy was associated with enhanced, mainly left-lateralized responses to written text along the ventral stream (including lingual gyrus, fusiform gyrus, and parahippocampal gyrus), dorsal stream (intraparietal sulcus), and (pre-) motor systems (pre-central sulcus, supplementary motor area), thalamus (pulvinar), and cerebellum. Significantly reduced responses were observed bilaterally in the superior parietal lobe (precuneus) and in the right angular gyrus. These positive effects corroborate and extend previous findings from cross-sectional studies. However, effects of literacy were specific to written text and (to a lesser extent) to false fonts. Contrary to previous research, we found no direct evidence of literacy affecting the processing of other types of visual stimuli such as faces, tools, houses, and checkerboards. 
Furthermore, unlike in some previous studies, we did not find any evidence for effects of literacy on responses in the auditory cortex in our Hindi-speaking participants. We conclude that learning to read has a specific and extensive effect on the processing of written text along the visual pathways, including low-level thalamic nuclei, high-level systems in the intraparietal sulcus and the fusiform gyrus, and motor areas. The absence of an effect of literacy on responses in the auditory cortex in particular raises questions about the extent to which phonological representations in the auditory cortex are altered by literacy acquisition or recruited online during reading.

Massive cortical reorganization in sighted braille readers

Łukasz Bola1,2,9, Katarzyna Siuda-Krzywicka1,3,9, Małgorzata Paplińska4, Ewa Sumera5, Katarzyna Jednoróg2, Artur Marchewka2, Magdalena Śliwińska6, Amir Amedi7,8, Marcin Szwed1; 1Jagiellonian University, Krakow, Poland, 2Nencki Institute of Experimental Biology, Warsaw, Poland, 3École des Neurosciences à Paris, Paris, France, 4Academy of Special Education in Warsaw, Poland, 5Institute for the Blind and Partially Sighted Children in Krakow, Poland, 6University College London, UK, 7The Hebrew University of Jerusalem, Israel, 8Sorbonne Universités, UPMC Univ Paris 06, Paris, France, 9Equally contributing authors

Neuroplasticity in the adult brain is thought to operate within the limits of sensory division, where the visual cortex processes visual stimuli and responds to visual training, the tactile cortex processes tactile stimuli and responds to tactile training, and so on. A departure from this rule is reported to be possible mainly during the large-scale reorganization induced by sensory loss or injury. The ventral visual cortex, in particular, is activated in blind subjects who read braille, and lesions of this area impair braille reading. Thus, this part of the visual cortex has the innate connectivity required to carry out a complex perceptual task – reading – in a modality different from vision. However, this connectivity is presumed to have been pruned during years of visual experience. Here we show that, contrary to this presumption, the ventral visual cortex can be recruited for tactile reading even in sighted adults. Twenty-nine subjects (3 male, 26 female, mean age = 29) – mostly braille teachers and educators, naïve to tactile braille reading – participated in a 9-month tactile braille reading course. At the beginning and end of the course, they underwent an fMRI experiment consisting of tactile braille reading and suitable control conditions (e.g. touching nonsense braille, imagining braille reading). Additionally, resting-state fMRI (rsfMRI) data were collected in both scanning sessions. At the end of the course, 9 subjects were also tested in a Transcranial Magnetic Stimulation (TMS) experiment. Almost all subjects learned tactile braille reading and reached reading speeds comparable to those of blind 2nd-grade children. The before-course fMRI experiment showed no significant activity specific to braille reading. After the course, however, subjects showed enhanced activity for tactile reading in the ventral visual cortex, including the Visual Word Form Area (VWFA), which was modulated by their braille reading speed. 
Results from the control conditions indicated that this visual cortex activity could not be explained by visual imagery. In the rsfMRI analysis, we observed increased functional connectivity between the VWFA and the left primary somatosensory cortex. Finally, TMS applied to the VWFA decreased the accuracy of tactile word reading in a lexical decision task; no such effect was observed during TMS stimulation of control regions. Our results demonstrate that cross-modal plasticity is possible even in the healthy, adult brain. To date, only a few experiments have suggested such a possibility, and none of them managed to confirm that such cortical changes are behaviorally relevant. Our study used a controlled, within-subject design and precise behavioral measures supplemented with a causal method, TMS. Its results suggest that large-scale plasticity is a viable, adaptive mechanism recruited when learning complex skills. This calls for a re-assessment of our view of the functional organization of the brain.

Network dysfunction predicts speech production after left-hemisphere stroke

Fatemeh Geranmayeh1, Robert Leech1, Richard J. S. Wise1; 1Computational Cognitive and Clinical Neuroimaging Laboratory, Imperial College, Hammersmith Hospital Campus, Du Cane Road, London, W12 0NN, UK.

INTRODUCTION: Recovery after a stroke resulting in aphasia is usually discussed only in terms of domain-specific functions, namely phonology, semantics and syntax. This is often coupled with speculations that intact ipsilesional or contralesional regions ‘take over’ these functions (1). However, domain-general processes also have a role, with some evidence that anterior midline frontal cortex may support residual language function (1,2). Within the restricted volume of this region there are anatomically overlapping but functionally separate components that constitute nodes within multiple distributed cognitive brain networks. These include a left and a right fronto-temporo-parietal network, a cingulo-opercular network, and the default-mode network. The default-mode network supports ‘internally directed’ cognition, and becomes less active when participants are engaged in externally directed stimulus-response tasks (3,4). Activity in this network is modulated by speech comprehension and production (5,6). METHODS: In the present functional MRI study, the effects of a previous left hemisphere stroke on brain activity were investigated as patients described pictures. The design included various baseline tasks, including counting, non-verbal target detection, and a rest baseline (7). The results were related to healthy participants performing the same tasks. The analyses investigated not only local speech-related activity, but also functional connectivity both within and between distributed networks using independent component analyses and psychophysiological interaction analyses. A multiple regression model identified network predictors of speech production. RESULTS: The patients showed an upregulation of activity in the cingulo-opercular network during the propositional speech task, in keeping with the upregulation of activity in this network when task demands are increased (P <0.05) (1,2). 
Although activity within individual networks was not predictive of speech production, the relative activity between networks was a predictor of both within-scanner and out-of-scanner performance, over and above that predicted from lesion volume and various demographic factors. Specifically, the robust functional imaging predictors were the differential activity and functional connectivity between the default mode network and the left fronto-temporo-parietal network (Beta = 0.54, P <0.001), and between the default mode network and the right fronto-temporo-parietal network (Beta = -0.50, P <0.001). The speech-specific functional connectivity between these networks was significantly altered in patients compared to controls. CONCLUSION: The demonstration that speech production is dependent on complex interactions within and between widely distributed brain networks indicates that recovery depends on more than the restoration of local domain-specific functions. This argues that the systems neuroscience of recovery of function after focal lesions is not adequately captured by notions of brain regions ‘taking over’ lost domain-specific functions, but is best considered as the interaction between what remains of domain-specific networks and the domain-general systems that regulate behaviour. REFERENCES: 1. F Geranmayeh et al. Brain. 2014;137:2632–2648. 2. SLE Brownsett et al. Brain. 2014;137:242–254. 3. GF Humphreys et al. doi: 10.1073/pnas.1422760112. 4. ME Raichle et al. PNAS 2001;98(2):676–682. 5. M Regev et al. J Neurosci. 2013;33(40):15978–15988. 6. M Awad et al. 2007;27(43):11455–11464. 7. F Geranmayeh et al. J Neurosci. 2014;34(26):8728–8740.
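The regression structure described above (performance modelled on the differential activity between network pairs, over and above lesion volume) can be sketched as follows. This is a minimal illustration with simulated data; all variable names, sample sizes, and values are ours, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32  # simulated number of patients

# Simulated per-patient network activity measures (illustrative names)
dmn = rng.normal(size=n)        # default-mode network
left_ftp = rng.normal(size=n)   # left fronto-temporo-parietal network
right_ftp = rng.normal(size=n)  # right fronto-temporo-parietal network
lesion_vol = rng.normal(size=n)
speech_score = 0.5 * (dmn - left_ftp) + rng.normal(scale=0.3, size=n)

# Differential (between-network) predictors, as in the abstract,
# plus lesion volume as a covariate and an intercept column
X = np.column_stack([
    dmn - left_ftp,    # DMN vs left FTP
    dmn - right_ftp,   # DMN vs right FTP
    lesion_vol,
    np.ones(n),
])
beta, *_ = np.linalg.lstsq(X, speech_score, rcond=None)
```

Here the differential terms play the role of the "relative activity between networks"; in the actual study the model also included demographic covariates and connectivity measures.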

A supervised framework for lesion segmentation and automated VLSM analyses in left hemispheric stroke

Dorian Pustina1,3, Branch Coslett1, Myrna Schwartz4, Brian Avants2,3; 1Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA, 2Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA, 3Penn Image Computing and Science Lab, University of Pennsylvania, Philadelphia, PA, USA, 4Moss Rehabilitation Research Institute, Elkins Park, PA, USA.

INTRODUCTION: Voxel-based lesion-symptom mapping (VLSM) is conventionally performed using skill and knowledge of experts to manually delineate brain lesions. This process requires time, and is likely to have substantial inter-rater variability. Here, we propose a supervised machine learning framework for lesion segmentation capable of learning the relationship between existing manual segmentations and a single T1-MRI volume in order to automatically delineate lesions in new patients. METHODS: Data from 60 aphasic patients with chronic left-hemispheric stroke were utilized in the study (age: 57.2±11.5yrs, post-stroke interval: 2.6±2.7yrs, 26 female). Lesion prediction was obtained in ANTsR (Avants, 2015) using the MRV-NRF algorithm (multi-resolution voxel-wise neighborhood random forest; Tustison et al., 2014) which relied on multiple features created from the T1-weighted MRI; i.e., difference from template, tissue segmentation, brain asymmetries, gradient magnitude, and deviances from 80 age and gender matched controls. To establish whether a voxel is lesioned, the algorithm learns the pattern of signal variation on these features in hierarchical steps from low to high resolution, considering both the voxel itself and its neighbors. A fully automatic pipeline was achieved by running iterative cycles of “register-predict-register”, where each registration improved gradually by removing the previous prediction from computations. Each case was predicted with a leave-one-out procedure using the predictive model trained on the other 59. Comparison with manual tracings was performed with standard metrics, while parallel VLSM models were built with manual and predicted lesions on 4 language measures: WAB subscores for repetition and comprehension (Kertesz, 1982), WAB-AQ, and PNT naming accuracy (Roach et al., 1996). RESULTS: The dice overlap between manual and predicted lesions was 0.70 (STD ±0.15). The correlation of lesion volumes was r=0.95 (p<0.001). 
The case-wise maximum displacement (Hausdorff) was 17mm (±8mm), and the area under the ROC curve was 0.87 (±0.1). Lesion size correlated with overlap (r=0.54, p<0.001), but not with maximum displacement (r=-0.15, p=0.27). VLSM thresholded t-maps (p<0.05, FDR corrected) showed a continuous dice overlap of 0.75 for AQ, 0.81 for repetition, 0.57 for comprehension, and 0.58 for naming. To investigate whether the mismatch between manual VLSM and automated VLSM involved critical areas related to cognitive performance, we created behavioral predictions from the VLSM models. Briefly, a prediction value was obtained from each voxel and the weighted average of all voxels was computed (i.e., voxels with a high t-value contributed more to the prediction than voxels with a low t-value). Manual VLSM showed slightly higher correlations of predicted performance with actual performance compared to automated VLSM (respectively, AQ: 0.65 and 0.60, repetition: 0.62 and 0.57, comprehension: 0.53 and 0.48, naming: 0.46 and 0.41). The difference between the two, however, was not significant (lowest p=0.07). CONCLUSIONS: These findings show that automated lesion segmentation is a viable alternative to manual delineation, producing lesion-symptom maps and behavioral predictions similar to those obtained with standard manual segmentations. The proposed algorithm is flexible with respect to learning from existing datasets, provides an automatic registration to template, and exceeds the prediction accuracy of current methods used in big data studies (i.e., PLORAS; Seghier et al., 2008).
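The two evaluation steps described above (Dice overlap between lesion masks, and a t-weighted behavioral prediction from a VLSM map) can be sketched as below. This is a minimal illustration assuming binary masks and a thresholded t-map as NumPy arrays; the function names are ours, not from the ANTsR pipeline:

```python
import numpy as np

def dice_overlap(manual, predicted):
    """Dice coefficient between two binary lesion masks:
    2|A ∩ B| / (|A| + |B|)."""
    manual = manual.astype(bool)
    predicted = predicted.astype(bool)
    intersection = np.logical_and(manual, predicted).sum()
    denom = manual.sum() + predicted.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def vlsm_weighted_prediction(t_map, lesion_mask):
    """Behavioral prediction from a thresholded VLSM t-map: the
    t-weighted average of per-voxel values, so voxels with a high
    t-value contribute more than voxels with a low t-value."""
    weights = np.clip(t_map, 0, None)  # keep only suprathreshold (positive) t
    if weights.sum() == 0:
        return 0.0
    return float((weights * lesion_mask).sum() / weights.sum())
```

For example, two masks that agree on one of two lesioned voxels each yield a Dice of 0.5, matching the 0.70 group average reported above in spirit.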

Slide Session B

Friday, October 16, 3:30 - 4:50 pm, Grand Ballroom

Perspectives on Language Processing

Chair: TBD
Speakers: Erika Hussey, Velia Cardin, Harm Brouwer, Greig de Zubicaray

HD-tDCS of left lateral prefrontal cortex improves garden-path recovery

Erika Hussey1, Nathan Ward1, Kiel Christianson1, Arthur Kramer1; 1University of Illinois at Urbana-Champaign

Recent research demonstrates that performance on executive control measures can be enhanced through brain stimulation of left lateral prefrontal cortex (LPFC; Berryhill et al., 2014; Coffman et al., 2014). Separate psycholinguistic work emphasizes the importance of left LPFC executive control resources during sentence processing (Ye & Zhou, 2009). This is especially the case when readers or listeners must ignore early, incorrect interpretations when faced with temporary ambiguity (i.e., garden-path recovery; Novick et al., 2005). Using high-definition transcranial direct current stimulation (HD-tDCS), we tested whether temporarily increasing cortical excitability of left LPFC had selective effects on language and memory conditions that rely on executive control (versus cases with minimal executive control demands, even in the face of task difficulty). Participants were randomly assigned to receive Active (anodal: n=27) or Control stimulation (sham: n=27; cathodal: n=26) of left LPFC while they (1) processed syntactically ambiguous and unambiguous sentences (see Christianson et al., 2001) in a non-cumulative self-paced moving-window paradigm, and (2) performed an n-back recognition memory task that, on some trials, contained interference lure items reputed to require executive control (Oberauer, 2005). Across both tasks, we parametrically manipulated executive control demands and task difficulty to disentangle these mechanistic contributions (see Fedorenko, 2014). Difficulty was introduced by varying the length of pre-critical sentence regions during the reading task (Witzel et al., 2012) and changing the number of to-be-remembered n-back items (Owen et al., 2005). Mixed-effects models revealed that the Active group outperformed Controls on (1) the sentence processing conditions requiring executive control, and (2) only the difficult n-back conditions, regardless of executive control demands. 
Specifically, the Active group demonstrated superior comprehension accuracy to questions following ambiguous sentences (t=2.449, p=0.01) and faster reading time of disambiguating sentence information of long sentences (t=2.124, p=0.03). On n-back, the Active group had better target/non-target discriminability at higher n-levels relative to Controls (t=2.066, p=0.04). These findings replicate tantalizing results from neuropsychological patients with focal insult to left LPFC (Novick et al., 2010) and functional neural coactivation in healthy adults (Hsu et al., 2013; January et al., 2008) during garden-path recovery and recognition of interfering memoranda. Additionally, our results suggest a potential causal role of left LPFC-mediated executive control for garden-path recovery. Finally, we provide initial evidence suggesting that brain stimulation may be a promising method to mitigate sentence processing demands in healthy adults.

Does the superior temporal cortex have a role in cognitive control as a consequence of cross-modal reorganization?

Velia Cardin1,2, Mary Rudner2, Rita De Oliveira3, Merina Su4, Josefine Andin2, Lilli Beese1, Bencie Woll1, Jerker Ronnberg2; 1Deafness Cognition and Language Research Centre, Department of Experimental Psychology, University College London, 49 Gordon Square, London WC1H 0PD., 2Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden., 3School of Applied Science, London South Bank University, 103 Borough Road, London SE1 0AA, 4Institute of Child Health, University College London

Cortical cross-modal reorganization in humans is the result of an interplay between sensory and cognitive factors. Congenital deafness provides a unique model to understand the contribution of each of these factors, given that neural reorganization is not only caused by sensory deprivation, but also by the use of language in a visual modality (i.e. sign language and lipreading). Working memory is the limited cognitive capacity available for on-line processing and temporary storage of information (Baddeley, 2003). Behavioral studies have shown an advantage in visual working memory performance in deaf individuals, suggesting that auditory deprivation may result in enhanced or different neural resources for cognitive processing. To address this question, we characterized plastic changes driven by auditory deprivation and sign language experience in the neural substrates supporting visual working memory. We conducted a functional magnetic resonance imaging (fMRI) experiment with three groups of participants: deaf native signers, hearing native signers and hearing non-signers. Participants performed a 2-back working memory task, and a control task, on two sets of stimuli: signs from British Sign Language or moving nonsense objects. Stimuli were presented as point-light displays to control for differences in visual features. We replicated previous findings showing stronger activations in deaf signers for all stimuli and tasks in the right posterior superior temporal cortex (STC) – a cross-modal plasticity effect for visuospatial processing driven by auditory deprivation. The group of deaf signers also showed stronger bilateral STC activation for sign language stimuli, showing that this region, traditionally thought to be involved in speech processing, has a multimodal role in language processing. Our results show characteristic activations in a fronto-parietal network for working memory in all groups. 
However, the deaf participants also recruited bilateral STC during the working memory task, but not during the control task, independently of the linguistic content of the stimuli. This was accompanied by a reduction in the recruitment of parietal and frontal regions typically associated with working memory in hearing individuals. Using resting-state connectivity analysis, we also found differences in the pattern of connectivity among frontal, parietal and superior temporal cortices between the group of deaf signers and each of the hearing groups. This suggests a functional shift towards cognitive control in superior temporal cortex as a consequence of cross-modal reorganization.

The Electrophysiology of Language Comprehension: A Neurocomputational Model

Harm Brouwer1, John Hoeks2, Matthew Crocker1; 1Saarland University, 2University of Groningen

We present a neurocomputational model of the electrophysiology of language processing. Our model is explicit about its architecture and the computational principles and representations involved. It is effectively a recurrent neural network (of the ‘Elman’ type; [1]) that directly instantiates a parsimonious functional-anatomic processing network linking the N400 and the P600—the two most salient language-related ERP components—to two computational epicenters in the perisylvian cortex [2,3]. The computational model constructs a situation model of the state of affairs described by a sentence on a word-by-word basis. Each word leads to a processing cycle centred on two core operations. First, the meaning of the incoming word is retrieved/activated, a process that is mediated by the left posterior part of the Middle Temporal Gyrus (lpMTG; BA 21), and the ease of which is reflected in N400 amplitude. Next, the left Inferior Frontal Gyrus (lIFG; BA 44/45/47) integrates this retrieved word meaning with the current situation model into an updated situation model, which is then fed back to the lpMTG to provide a context for the retrieval of the next word. The effort involved in situation model updating is indexed by P600 amplitude. We discuss our model, and show that it accounts for the pattern of N400 and P600 modulations across a wide range of processing phenomena, including semantic anomaly, semantic expectancy (on nouns and articles [4]), syntactic violations, and garden-paths. Critically, our model also captures the ‘semantic P600’ phenomenon, which has spawned a considerable amount of debate [see 2,5,6]. 
This is exemplified by a simulation of an ERP experiment contrasting different types of semantic anomalies in Dutch [7]: Control: ‘The javelin was by the athletes thrown’ (literal translation); Reversal: ‘The javelin has the athletes thrown’ (P600-effect relative to Control); Mismatch_Pas: ‘The javelin was by the athletes summarized’ (N400/P600-effect); and Mismatch_Act: ‘The javelin has the athletes summarized’ (N400/P600-effect). Statistical evaluation of our simulation results (within-items RM-ANOVA with Huynh-Feldt correction where necessary) showed a perfect replication of the original findings. For the N400, there was a main effect of Condition (F(3,27)=45.1; p<.001), and pairwise comparisons (Bonferroni corrected) showed that the N400-effect was absent in reversal sentences (p=.47), while there was a significant N400-effect for the mismatch conditions (p-values<.005). As for the P600, there was a main effect of Condition (F(3,27)=136.5; p<.001), and pairwise comparisons showed a P600-effect for all three anomalous conditions (p-values<.001). The implications of our model will be discussed, and we will argue that explicit computational models and quantitative simulations are generally superior to verbal ‘box-and-arrow’ accounts, and necessary for settling theoretical debates, such as the one concerning the ‘semantic P600’ phenomenon. References: [1] Elman (1990); [2] Brouwer et al. (2012); [3] Brouwer and Hoeks (2013); [4] DeLong et al. (2005); [5] Kuperberg (2007); [6] Bornkessel-Schlesewsky and Schlesewsky (2008); [7] Hoeks et al. (2004).
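The model's backbone, a simple recurrent ('Elman') network in which the previous hidden state is copied back as context for the next word, can be sketched minimally as below. Layer sizes, weight scales, and class names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

class ElmanSRN:
    """Minimal Elman (1990) simple recurrent network: the hidden state
    from the previous word is kept as a context layer and fed back
    alongside the current input, giving word-by-word memory."""

    def __init__(self, n_in, n_hidden, n_out):
        self.W_ih = rng.normal(0, 0.1, (n_hidden, n_in))      # input -> hidden
        self.W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))  # context -> hidden
        self.W_ho = rng.normal(0, 0.1, (n_out, n_hidden))     # hidden -> output
        self.context = np.zeros(n_hidden)

    def step(self, x):
        # Hidden activation combines the current word with the prior context
        h = np.tanh(self.W_ih @ x + self.W_hh @ self.context)
        self.context = h       # copy-back: context for the next word
        return self.W_ho @ h   # e.g. the updated situation-model layer
```

In the model described above, one such cycle per word would correspond to retrieval (N400-indexed) followed by integration into the situation model (P600-indexed); this sketch shows only the recurrent machinery those operations run on.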

A sound explanation for the motor cortex representations of action words

Greig de Zubicaray1, Katie McMahon2, Joanne Arciuli3; 1Queensland University of Technology, Brisbane, Australia, 2University of Queensland, Brisbane, Australia, 3University of Sydney, Sydney, Australia

Language processing is an example of implicit learning of multiple statistical cues that provide probabilistic information regarding word structure and use. Much of the current debate about language embodiment is devoted to how action words are represented in the brain, with motor cortex activity evoked by these words assumed to selectively reflect conceptual content and/or its simulation. However, there is a substantial body of psycholinguistic research demonstrating that the degree to which a word’s phonology is typical of other words in its grammatical category influences online processing, particularly for verbs and nouns. Using fMRI in healthy participants (N=17) and an auditory lexical decision task (LDT), we found that monosyllabic verbs (e.g., bite, grasp, walk) denoting body-part-specific (i.e., face, arm, leg) actions evoked differential motor cortex activity. This result is typically interpreted in support of language embodiment. Crucially, we conducted two additional sets of analyses that demonstrated this activity is due to phonological rather than conceptual processing. The first included a measure of the action words’ phonological typicality (calculated by subtracting the average verb distance for a word from its average noun distance; Monaghan, Christiansen, Farmer, & Fitneva, 2010). This revealed a gradient of phonological typicality for the action word types (face < arm < leg) that was associated with a significant parametric modulation of activation across both premotor and primary motor cortices. A second set of conjunction analyses showed that monosyllabic nonwords matched to the action words in terms of phonotactic probability (a measure of the frequency with which phonological segments occur in a given position in a word; Vitevitch & Luce, 2004) evoked similar “body-part-specific” activity in identical motor areas. Thus, motor cortex responses to action words cannot be assumed to selectively reflect conceptual content and/or its simulation. 
Our results clearly demonstrate that motor cortex activity reflects implicit processing of phonological statistical regularities that are typically unaccounted for in studies of language embodiment.
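The typicality measure above (a word's average distance to nouns minus its average distance to verbs) can be sketched as follows. Monaghan et al. compute distances over phonological feature representations from large corpora; this sketch substitutes plain string edit distance over toy word lists purely to illustrate the subtraction, so the word lists and distance metric are illustrative assumptions:

```python
def edit_distance(a, b):
    """Levenshtein distance between two strings (standing in for
    phoneme sequences)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def phonological_typicality(word, verbs, nouns):
    """As in the abstract: average noun distance minus average verb
    distance; positive values mean the word is phonologically verb-like."""
    mean_verb = sum(edit_distance(word, v) for v in verbs) / len(verbs)
    mean_noun = sum(edit_distance(word, n) for n in nouns) / len(nouns)
    return mean_noun - mean_verb
```

A verb whose form sits closer to other verbs than to nouns thus gets a positive typicality score, and the abstract's face < arm < leg gradient is a gradient in exactly this quantity.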

Slide Session C

Saturday, October 17, 8:00 - 9:20 am, Grand Ballroom

Outside the Left Peri-Sylvian Cortex

Chair: TBD
Speakers: Daniela Sammler, Jonathan H. Drucker, Zarinah Agnew, Nathaniel Klooster

Dual streams for prosody in the right hemisphere

Daniela Sammler1,2, Marie-Hélène Grosbras2,3, Alfred Anwander1, Patricia E. G. Bestelmeyer2,4, Pascal Belin2,3,5; 1Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany, 2Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK, 3Institut des Neurosciences de La Timone, CNRS and Université Aix-Marseille, France, 4School of Psychology, Bangor University, Bangor, UK, 5BRAMS, University of Montréal and McGill University, Montréal, Canada

Our vocal tone—the prosody—contributes a lot to the meaning of speech beyond the actual words. Indeed, the hesitant tone of a ‘yes’ may be more telling than its affirmative lexical meaning. The human brain contains dorsal and ventral processing streams in the left hemisphere that underlie core linguistic abilities such as phonology, syntax and semantics. Whether or not prosody—a reportedly right-hemispheric faculty—involves analogous processing streams is a matter of debate. Functional connectivity studies on prosody leave no doubt about the existence of such streams, but opinions diverge on whether information travels along dorsal or ventral pathways, or both. Here we show, in a novel paradigm using audio morphing of prosody combined with functional/diffusion-weighted neuroimaging (fMRI/DWI; Experiment 1) and transcranial magnetic stimulation (TMS; Experiment 2), that prosody perception takes dual routes along dorsal and ventral pathways in the right hemisphere. In Experiment 1, categorization of speech stimuli that gradually varied in their prosodic pitch contour (between statement and question) involved (i) an auditory ventral pathway along the middle longitudinal fascicle in the superior temporal lobe, and (ii) an auditory-motor dorsal pathway connecting posterior temporal and laryngeal premotor/inferior frontal areas via the arcuate/superior longitudinal fascicle. In Experiment 2, 15 minutes of inhibitory repetitive TMS of right (but not left) laryngeal premotor cortex as a key node of the dorsal pathway decreased participants’ performance in prosody categorization (but not in a control task), arguing for a motor involvement in prosody perception. 
Following prevailing dual-stream models of language, we propose that prosody perception relies on complementary mechanisms implemented in ventral and dorsal streams in the right hemisphere: while the ventral pathway may extract and integrate auditory features into a time-invariant “prosodic Gestalt” (‘What’) to map prosody to communicative meaning, the dorsal pathway is more likely to map the perceived pitch contour to (subvocal) articulation (‘How’) to enhance the perception of subtle vocal prosodic cues. In sum, our data draw a dual-stream picture of prosodic processing that shows plausible analogies to the established left-hemispheric multi-stream architecture of language, but with a relative rightward asymmetry.

Does right frontal activity help or hurt word retrieval?

Jonathan H. Drucker1,2, Keith M. McGregor1,2, Charles M. Epstein2, Bruce Crosson1,2,3,4; 1Atlanta VA Center of Excellence for Visual and Neurocognitive Rehabilitation, 2Emory University, 3Georgia State University, 4University of Queensland

Neural activity in the left frontal lobe is a hallmark of language processing, but older adults demonstrate right frontal activity as well (Wierenga et al., 2008). Increased right frontal activity in older adults, specifically in pars triangularis of the inferior frontal gyrus (PTr), is associated with poorer performance in word retrieval tasks (Meinzer et al., 2009; 2012). This phenomenon has yet to be explained. One hypothesis posits that increased right frontal activity in older adults is compensatory, mitigating age-related decline in language function. Alternatively, we suggest that increased right frontal activity in older adults is competitive with language function, reflecting diminished interhemispheric suppression. In aphasia, evidence for the competition hypothesis comes from patients with nonfluent aphasia undergoing low-frequency (1Hz) repetitive transcranial magnetic stimulation (rTMS). Suppression of right frontal (PTr) cortical excitability using 1Hz rTMS leads to faster and more accurate word retrieval in nonfluent aphasia patients (Naeser et al., 2005; 2011; Barwood et al., 2011). A parsimonious interpretation is that activity in right PTr was competitive, not compensatory, and that inhibiting this activity facilitated word retrieval. We address two related questions in the current experiment. First, does rTMS suppression of right PTr help or hurt word retrieval in healthy older adults? Second, is 1Hz rTMS facilitation of language unique to stroke patients, or does it address a more general component of the aging process? To date, we have recruited 17 neurologically normal, right-handed adults. Nine were between the ages of 65-89 (older: 8f, 1m), and eight were between the ages of 20-34 (younger: 3f, 5m). Ten minutes of low-frequency (1Hz) rTMS was applied to the experimental area of cortex (right PTr) or to a neighboring control area (right pars opercularis: POp). Sham rTMS was also applied for comparison. 
Immediately after real or sham rTMS, participants named 30 pictures presented on a computer screen. Reaction times for picture naming were calculated offline. Each participant experienced each of the four conditions, divided into two sessions on different days. After controlling for differences in performance across participants and picture items, average response times in the real-PTr condition were compared against real-POp (controlling for location in the brain) and against sham-PTr (controlling for psychological or other non-neural effects of rTMS), for both the older and younger age groups. Older participants exhibited faster word retrieval after real rTMS to PTr than after real rTMS to POp (location control: Δ = 139ms, p = .017) or sham rTMS (placebo control: Δ = 155ms, p = .002). In the younger group, there was no significant difference (p = .333 and p = .081, respectively). These results suggest that increased neural activity in the right pars triangularis is competitive with language function in healthy older adults, and that the ability to suppress this activity decreases as part of the normal aging process. The differences we observed between the age groups suggest that rTMS as a therapy for nonfluent aphasia could be more effective for older than for younger patients.

Investigating the role of cerebellum in sensory processing during vocal behavior with theta burst stimulation

Zarinah Agnew1, Jeevit Gill1, Srikantan Nagarajan2, Richard Ivry3, John Houde1; 1University of California San Francisco, Department of Otolaryngology, 2University of California San Francisco, Department of Radiology, 3University of California Berkeley

The present collection of studies aimed to investigate the nature of auditory feedback processing in patients with cerebellar degeneration by measuring various aspects of vocal behaviour. It has been proposed that the cerebellum serves to generate predictions about the sensory consequences of future movements. As such, complete reliance, or over-reliance, on sensory feedback is thought to result in unstable movements. In line with this thinking, patients with cerebellar damage, such as those with cerebellar ataxia, are known for their deficits in visually guided movement, and their movements are known to improve in the absence of visual feedback. Thus it is suggested that patients with damage to the cerebellum are less able to make accurate predictions about the sensory consequences of movements and must rely on reafferent information, which ultimately leads to unstable movements. Here we report four separate sets of data, which together identify a clear role for the cerebellum in feedback processing during vocal behaviour. In order to assess vocal behaviour in this patient group, we designed auditory-motor experiments that paralleled visually guided reaching tasks. Two sets of patients with cerebellar damage were tested on a battery of vocal assessments designed to probe different aspects of vocalisation: the ability to produce spontaneous voicing, pitch tracking of a moving pitch target, and responses to pitch perturbation. We tested the hypothesis that reducing auditory feedback during vocalisation would improve vocal stability, showing that under auditory masking conditions, variability in vocal pitch is significantly reduced in patients with cerebellar damage. To investigate this idea further, a third experiment examined how patients responded to perturbations of pitch production, in which auditory feedback was pitch-shifted during vocalisation. 
As predicted, patients with cerebellar damage displayed significantly altered responses to the pitch shift compared to healthy age-matched controls, indicating an alteration in the way reafferent information is utilised. Finally, continuous theta burst stimulation to cerebellar cortex in healthy controls confirmed a role for cerebellar processing in compensation for an imposed shift in auditory feedback. Together, these experiments provide compelling evidence for the idea of the cerebellum as a prediction system, the dysfunction of which leads to over-reliance on sensory feedback and hence to unstable auditorily guided vocal movements. These data will be discussed in relation to the function of the cerebellum in the neural control of vocal behaviour and current models of speech production.

Impoverished remote semantic memory in hippocampal amnesia

Nathaniel Klooster1, Melissa Duff1,2,3; 1Neuroscience Graduate Program, 2Department of Communication Sciences and Disorders, 3Department of Neurology, University of Iowa

There has been considerable debate regarding the necessity of the hippocampus for acquiring new semantic concepts. It is generally accepted, however, that any role the hippocampus plays in semantic memory is time limited and that previously acquired information becomes independent of the hippocampus over time through neocortical consolidation. This view, along with intact naming and word-definition matching performance in amnesia, has led to the notion that remote semantic memory is intact in patients with hippocampal amnesia. Motivated by perspectives on word learning as a protracted process in which additional features and senses of a word are added over time, and by recent discoveries about the time course of hippocampal contributions to on-line relational processing, reconsolidation, and the flexible integration of information, we revisit the notion that remote semantic memory is intact in amnesia. We tested 1) 5 patients with bilateral hippocampal damage (HC) and severe declarative memory impairment, 2) a group of 6 brain-damaged comparison (BDC) participants with bilateral damage to the ventromedial prefrontal cortex, and 3) demographically matched non-brain-damaged healthy comparison participants (NCs). In psycholinguistic studies, the number of features of a concept (e.g. a cucumber is a vegetable, has green skin, is cylindrical, grows on vines, grows in gardens, is used for making pickles, etc.) is an often-used measure of semantic richness. We chose target words from normed databases and gave participants 2 minutes to list as many features for each target as possible. NCs and BDCs performed indistinguishably from each other, producing twice as many features on average as the HC group. The number of senses a word can take (e.g. shot: medical injection; sports attempt; gunfire; small serving of whiskey) is another commonly used psycholinguistic measure of semantic richness. 
We chose target words from normed databases and gave participants 1 minute to list as many senses of each target word as possible. Again, the amnesic participants produced significantly fewer senses than NCs and BDCs. The Word Associate Test (WAT) is a receptive measure of vocabulary depth. The test presents participants with 40 target adjectives and requires them to pick 4 correct associates or collocates from among 8 possibilities per target. Consistent with the previous measures, the NCs and BDCs performed indistinguishably from each other and significantly higher than the HC group on the WAT. On both productive and receptive measures of vocabulary depth and semantic richness, we find that a group of hippocampal amnesic participants display impoverished remote semantic memory compared to demographically matched healthy participants and brain-damaged comparison participants. The performance of the BDC group, which did not differ from NCs, suggests that the observed deficits are attributable to hippocampal damage and are not a consequence of brain damage more generally. These findings suggest a reconsideration of the traditional view that remote semantic memory is fully intact following hippocampal damage. The impoverished remote semantic memory in patients with hippocampal amnesia suggests that the hippocampus plays a role in the maintenance and updating of semantic memory beyond its initial acquisition.