Slide Sessions

Session A: Speech Processing
Wednesday, August 27, 4:30 - 5:50 pm, Effectenbeurszaal

Session B: Language Evolution and Brain Structure
Thursday, August 28, 8:30 - 9:50 am, Effectenbeurszaal

Session C: Combinatorial Processing: Syntax, Semantics, Pragmatics
Friday, August 29, 8:30 - 9:50 am, Effectenbeurszaal

Session D: Lexical Processing and Cognitive Control
Friday, August 29, 1:00 – 2:20 pm, Effectenbeurszaal



Slide Session A

Wednesday, August 27, 4:30 - 5:50 pm, Effectenbeurszaal

Speech Processing

Chair: Heather Bortfeld
Speakers: Rutvik Desai, David Corina, Simon Fischer-Baum, James R. Monette

Motor coordination predicts literal and figurative action sentence processing in stroke

Rutvik Desai1, Troy Herter1, Chris Rorden1, Julius Fridriksson1; 1University of South Carolina

Introduction: Considerable evidence exists for the involvement of sensory and motor systems in concept representation. Crucial questions now concern the precise nature of this involvement. Actions involve many levels of processing, from details of specific movements such as direction and speed, to higher level planning and coordination. If action concepts are grounded in motor systems, what roles do these different levels of representations play? Here, we investigated action performance and action semantics in a cohort of 40 stroke patients in order to examine their relationship. Methods: Subjects performed two action tasks using a planar endpoint robot in an augmented reality environment. One action task required subjects to use paddles attached to each hand to hit as many objects as possible as the objects moved towards the subjects in the horizontal plane (Object Hit task). A second task was similar, except that subjects only hit objects of a certain shape and avoided hitting objects of other shapes (Object Hit and Avoid task). Both tasks require bimanual coordination for better performance. We examined Hit Bias (bias in the hand used for hits) and Movement Area Bias (bias in the movement area of the hands). A high bias on either measure indicates a lack of bimanual coordination. Subjects were tested separately on a semantic task, in which they made meaningfulness judgments on sentences with action or abstract verbs. Three kinds of action sentences were used: literal action (The boy lifted the pebble from the ground), metaphoric action (The discovery lifted the nation out of poverty), and idiomatic action (The country lifted the veil on its nuclear program). These three conditions represent levels of abstraction in action meaning, in that literal sentences describe physical actions, idiomatic sentences convey an abstract meaning through a formulaic phrase that uses the same action verb, while non-idiomatic metaphors are at an intermediate level. Abstract sentences (The discovery eliminated poverty in the country) served as controls. One hundred meaningful sentences (25 in each condition) and 50 nonsense sentences were presented aurally in random order, and subjects gave a yes/no response to each with a button press. We computed scores representing the difference between accuracy in each action condition and the Abstract condition. These difference scores were correlated with measures from the two action tasks using Spearman’s correlation. Results: We found that the difference score for each of the action conditions was correlated with bias measures in both tasks, such that a higher bias (reduced bimanual coordination) predicted action-specific reduction in sentence processing accuracy. The overall score in the action tasks showed no correlation. Conclusions: These results show that a higher order action parameter, bimanual coordination, is strongly associated with action semantics in the context of sentence processing. Furthermore, this role persists even when action sentences are metaphoric or idiomatic, and convey an abstract meaning. Thus, higher order action systems of the brain play a causal role in both literal and figurative action sentence semantics, and provide grounding for conceptual content.
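The difference-score analysis described above can be illustrated with a minimal Python sketch. All arrays, values, and variable names below are random placeholders standing in for the patient accuracies and robotic bias measures; they are not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_patients = 40

# Accuracy (proportion correct) per sentence condition, one value per patient (illustrative).
acc = {
    "literal": rng.uniform(0.5, 1.0, n_patients),
    "metaphoric": rng.uniform(0.5, 1.0, n_patients),
    "idiomatic": rng.uniform(0.5, 1.0, n_patients),
    "abstract": rng.uniform(0.5, 1.0, n_patients),
}

# Bimanual-coordination bias from the robotic task (higher = less coordinated; illustrative).
hit_bias = rng.uniform(0.0, 1.0, n_patients)

# Difference scores: each action condition minus the Abstract control condition,
# rank-correlated with the bias measure (Spearman).
for condition in ("literal", "metaphoric", "idiomatic"):
    diff = acc[condition] - acc["abstract"]
    rho, p = spearmanr(diff, hit_bias)
    print(f"{condition:>10}: rho = {rho:+.2f}, p = {p:.3f}")
```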

Limb Apraxia in American Sign Language

David Corina1, Svenna Pedersen2, Cindy Faranady2, Corianne Rogalsky3, Gregory Hickok4, Ursula Bellugi2; 1University of California, Davis, 2The Salk Institute for Biological Studies, 3Arizona State University, 4University of California, Irvine

Limb apraxia is a deficit in skilled movement that cannot be attributed to weakness, akinesia, abnormal tone or posture, movement disorders (e.g., tremor, chorea), deafferentation, intellectual deterioration, poor comprehension, or lack of cooperation (Koski, Iacoboni, & Mazziotta, 2002; Ochipa & Rothi, 2000). Signed languages used in Deaf communities require the production and comprehension of skilled upper limb and body movements and thus are vulnerable to apraxic disturbances. Studies of deaf signers who have incurred left-hemisphere damage have reported instances of dissociation between linguistic manual actions and non-linguistic manual movements and pantomime (Corina et al 1999, Marshall 2004). Less well studied are cases where limb apraxia accompanies sign language disturbance. Here we ask how limb apraxia affects the form of sign language production errors seen in deaf aphasics. We analyzed data from 4 left hemisphere lesioned signers who show impaired performance on the Kimura test of limb apraxia and 3 subjects who exhibit sign language aphasia without marked limb apraxia. We coded each subject’s errors for compositional properties of ASL: handshape, path and internal movement, location and palm orientation, as well as assessments of sequential sign actions (handshape and movement transitions within and across signs). Our preliminary data indicate that while handshape substitutions were relatively common in all of the sign aphasics, signers with limb apraxia were particularly impaired in sequential movements of the hand postures. In addition, movement trajectories (i.e. path movements) were more likely to be repeated and show evidence of successive articulatory approximation. The data are consistent with limb-kinetic apraxia, a disorder in which fine movements of the hands and fingers are particularly vulnerable to impairment following left hemisphere parietal damage, but also point to the disturbance of spatial-temporal implementation of multi-joint limb movements (Poizner et al 1997).

Levels of representation during single word reading: Evidence from representation similarity analysis

Simon Fischer-Baum1, Emilio Tamez2, Donald Li3; 1Rice University, 2University of Pennsylvania, 3Johns Hopkins University

Multiple levels of representation are involved in reading words: visual representations of letter shape, orthographic representations of letter identity and order, phonological representations of the word’s pronunciation, and semantic representations of its meaning. Previous neuroimaging studies have identified a network of regions recruited during word reading, including ventral occipital-temporal regions. However, there is still uncertainty about what information is represented in these regions. In this study, we use a multivoxel pattern analysis technique for analyzing fMRI data – representational similarity analysis – to decode the type of information being represented in different brain regions when individuals read words. Consider how the word DOUGH relates to the words TOUGH, SEW and BREAD. DOUGH is related to TOUGH visually and orthographically, to SEW phonologically, and to BREAD semantically. Similarity among the patterns of neural response to different written words can be used to determine where in the brain each type of information is represented. Regions that respond similarly to DOUGH and TOUGH, but not to BREAD or SEW, represent orthographic or visual information, while locations that respond similarly to DOUGH and BREAD, but not SEW or TOUGH, contain representations of semantic information. Critical stimuli consisted of 35 written words, presented once per run over the course of twelve runs. Four theoretically predicted similarity matrices comparing each of the 35 words to every other word were computed based on theories of Visual, Orthographic, Phonological and Semantic representation. Twelve English-speaking participants were instructed to read these words, pressing a button each time a proper name was presented, while whole-brain scans were acquired on a Siemens TRIO 3T scanner (Voxel size: 3.375×3.375×4mm; TR = 2.0secs). After pre-processing, a general linear model was applied to obtain a β-map for each of the thirty-five words. Using both whole-brain searchlight and anatomically defined regions of interest, brain-based similarity matrices were constructed to determine how similar the distributed patterns of neural activity for each word were to those for every other word. Strong correlations between brain-based similarity measures and the four theoretically predicted similarity matrices indicate the type of information represented in each region. Group level results of the searchlight analysis reveal multiple levels of representation associated with reading these words along the ventral occipital-temporal lobe. Similarity in the patterns of response to individual words in occipital regions correlates with the visual similarity matrix, but not the semantic, phonological or orthographic similarity matrices. Patterns of activity in portions of the anterior temporal lobe correlate only with the semantic similarity matrix while patterns of activity in the left midfusiform gyrus correlate only with orthographic similarity. This latter result is confirmed with the anatomical ROI analysis, which shows that the pattern of activity across the entire left fusiform gyrus correlates significantly with orthographic similarity, but not with other types of similarity. Taken together, these results provide unique insights into the neural instantiation of different levels of representation in written word processing, and can help adjudicate between competing hypotheses of the neural bases of reading.
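For readers unfamiliar with representational similarity analysis, the following is a minimal sketch of its core step: rank-correlating a brain-based dissimilarity matrix with theoretically predicted matrices. The beta patterns and model matrices here are random placeholders (the actual analysis used searchlight and ROI patterns derived from the fMRI data), and all names are illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_words, n_voxels = 35, 200

# One beta pattern per word within a region of interest (random stand-in data).
betas = rng.standard_normal((n_words, n_voxels))

# Brain-based dissimilarity: correlation distance between every pair of word patterns,
# kept in condensed (vectorised upper-triangle) form.
brain_rdm = pdist(betas, metric="correlation")

# The four theoretically predicted matrices would be derived from word properties;
# random placeholders of matching shape are used here.
model_rdms = {name: rng.random(brain_rdm.shape[0]) for name in
              ("visual", "orthographic", "phonological", "semantic")}

# Relate the neural geometry to each theoretical model with a rank correlation.
for name, model_rdm in model_rdms.items():
    rho, p = spearmanr(brain_rdm, model_rdm)
    print(f"{name:>12}: rho = {rho:+.2f}, p = {p:.3f}")
```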

ERP Effects for Prominence in Reference Resolution

James R. Monette1, John E. Drury1; 1Stony Brook University

[BACKGROUND] Previous ERP studies of anaphor processing have reported sustained anterior negativities (Nrefs) following anaphors in contexts with more than one potential antecedent (e.g., “Bruce told Al that HE…”; Nieuwland & Van Berkum 2006). More recently it has become clear that these situations of referential ambiguity may also give rise to P600-type effects, with the observed pattern (i.e., Nref, P600, or both) depending on both presence/absence and type of behavioral task as well as individual differences in working memory span (Nieuwland & van Berkum 2008; Nieuwland 2014). However, electrophysiological investigations of reference resolution have not pursued potential differences within their referentially unambiguous control conditions, namely whether the subject or the object of the matrix clause is taken to be the single available referent (e.g., “John told Sarah that he…” vs. “John told Sarah that she…”). These antecedent positions differ in locality to the pronoun (object position > subject position) and in relative prominence (subject position > object position), both of which have been shown to influence reference resolution in behavioral studies (Foraker & McElree 2007, Felser 2014). [STUDY] The present ERP reading/judgment study examined responses to pronouns in contexts with 2, 1, or 0 available antecedents. Additionally, we divided the cases with only one available referent [1Ref] based on whether the first or second NP served as the antecedent. For example: [2Ref] “Mary told Jane that SHE…” [1Ref-NP1] “Mary told John that SHE…” [1Ref-NP2] “John told Mary that SHE…” [0Ref] “Mike told John that SHE…”. Included also in this study were a range of violation types targeting, e.g., morpho-syntax (“could *walks…”), semantic anomaly (“ate the *cloud”), and logical-semantics/pragmatics (“There wasn’t *John in the room”). [METHODS] Sentence presentation was standard RSVP, followed by acceptability judgments on a 1-4 scale. ERPs were time-locked to pronouns and were examined for 1200 ms epochs (100 ms baseline). Individual reading span scores were acquired for each participant prior to testing. [CONCLUSION] Preliminary data (N=13) suggest both Nref and P600 effects for both [2Ref] and [0Ref] compared to the [1Ref-NP1] cases (consistent with Nieuwland 2014). Comparison between the [1Ref-NP2] and [1Ref-NP1] cases showed a broadly distributed negativity for the [1Ref-NP2] condition that was present over anterior electrodes from 400-900ms and posterior electrodes from 500-1000ms. The anterior portion of the effect differed significantly in amplitude and scalp distribution from the Nref effects observed in our 2Ref condition, suggesting that this profile is distinct from those elicited in response to referential ambiguity. Likewise, the posterior portion of the effect differed in timing and distribution from N400 effects elicited elsewhere in the study. We interpret these results as evidence for a cognitive bias towards selecting more prominent antecedents, taking the effects observed for the [1Ref-NP2] condition to index an extra cognitive burden for coindexing the pronoun with a less preferred antecedent. We situate our discussion within the framework of Content Addressable Memory (CAM), and make a case for connecting prominence with focal attention.



Slide Session B

Thursday, August 28, 8:30 - 9:50 am, Effectenbeurszaal

Language Evolution and Brain Structure

Chair: Sonja Kotz
Speakers: Mackenzie E. Fama, Lauren Covey, Fatemeh Mollaei

The effects of healthy aging and left hemisphere stroke on statistical language learning

Mackenzie E. Fama1, Katie D. Schuler1, Kate A. Spiegel1, Elizabeth H. Lacey1,2, Elissa L. Newport1, Peter E. Turkeltaub1,2; 1Georgetown University, 2MedStar National Rehabilitation Hospital

Sentences in spoken language are a continuous stream of sound, with no reliable acoustic cues marking word boundaries. To identify word boundaries, language learners use an implicit statistical learning mechanism that computes transitional probabilities between syllables. Infants, children, and young adults perform this type of statistical learning, without explicit instructions or feedback. Neuroimaging studies in healthy young adults have suggested that the left inferior frontal gyrus (IFG), left arcuate fasciculus, and bilateral caudate and putamen are involved in speech segmentation via statistical learning. Here we test whether this learning is disrupted by healthy aging or left hemisphere injury. Peñaloza et al. (2014) demonstrated some speech segmentation in individuals with left hemisphere injury, but only weak tests of learning were administered. Participants were 14 healthy college-aged adults (mean age 19.1, 6 male/8 female), 28 healthy older adults (mean age 57.8, 13 male/15 female), and 24 patients in the chronic phase of recovery from left-hemispheric stroke (mean age 59.9, 17 male/7 female). The artificial language (Saffran, Aslin & Newport 1996) uses 12 distinct syllables organized into 4 trisyllabic words, randomly ordered with equal frequencies for the words and their junctures, and concatenated into a continuous speech stream by a synthesizer with no acoustic cues to word boundaries. Participants listened for 10 minutes while performing a monitoring task to ensure they attended to the stream. After exposure, participants completed a 30-item post-test in which they heard a Word, Part-word (trisyllabic sequence that spanned a word boundary), or Non-word (3 familiar syllables in an unfamiliar sequence) and rated “How familiar does this sound?” from 1 (not at all) to 5 (very). Patients also completed a battery of language and cognitive tests. Young controls rated Words (mean rating: 4.17) > Part-words (3.60) > Non-words (2.60); in a within-group repeated measures ANOVA, the main effect of word type (F(2,26)=27.833, p<.001) and all possible pairwise comparisons were significant. Older controls also showed a significant main effect of word type: Words (3.62) > Part-words (3.38) > Non-words (2.79) with F(2,54)=13.885, p<.001; however, only pairwise comparisons to Non-words were significant (Word vs. Part-Word approached significance: p=.067). Patients’ mean ratings: Word (2.84), Part-word (3.01), and Non-word (2.84), did not show a significant main effect (F<1, p=.404) nor any significant pairwise comparisons. Between-group repeated measures ANOVAs show significant WordType*SubjectType interactions for younger vs. older controls (F(2,80)=3.77, p=.027) and older controls vs. patients (F(1.71,85.29)=7.19, p=.002). Preliminary voxel-based lesion symptom mapping analysis suggests that lesions in left IFG are associated with poorer ability to distinguish Words from Part-words. Only young controls showed the robust statistical learning required to distinguish Words from Part-words. Healthy older controls showed sequence recognition (distinguishing Words from Non-words) but somewhat reduced statistical learning. As a group, patients showed no learning, although some individual subjects performed better than others. These findings suggest that speech segmentation ability, like other implicit learning skills, declines during healthy aging. It is also sensitive to left hemisphere injury, specifically to IFG lesions, supporting prior evidence for the role of left IFG in statistical language learning.
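The transitional-probability computation underlying this kind of segmentation can be sketched as follows. The four trisyllabic words below are invented stand-ins that mirror the 4-word, 12-syllable structure described above; the syllables themselves and the stream length are not the study's stimuli.

```python
import random
from collections import Counter

random.seed(2)

# Four made-up trisyllabic words standing in for the artificial language
# (cf. Saffran, Aslin & Newport, 1996).
words = [("pa", "bi", "ku"), ("ti", "bu", "do"), ("go", "la", "tu"), ("da", "ro", "pi")]

# Concatenate words in random order (no immediate repeats) into a continuous stream.
stream, prev = [], None
for _ in range(400):
    word = random.choice([w for w in words if w is not prev])
    stream.extend(word)
    prev = word

# Transitional probability P(next | current) = count(current -> next) / count(current).
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])
tp = {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

print("within-word  pa -> bi:", round(tp[("pa", "bi")], 2))        # high (= 1.0 here)
print("across-word  ku -> ti:", round(tp.get(("ku", "ti"), 0.0), 2))  # low (~ 0.33)
```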

An ERP investigation of the role of prediction and individual differences in semantic priming

Lauren Covey1, Caitlin Coughlin1, María Martínez-García1, Adrienne Johnson1, Xiao Yang1, Cynthia Siew1, Travis Major1, Robert Fiorentino1; 1University of Kansas

A number of ERP studies have shown N400 amplitude reductions as a function of contextual support; however, the extent to which this reflects prediction remains an issue (e.g., DeLong et al., 2005). Under the prediction account, N400 amplitude reduction is at least in part the result of predicting particular upcoming material, a process which requires attentional control and may show individual variation. Moreover, it has been argued that these predictive effects are not limited to sentence contexts, but also extend to word-pair semantic priming (Hutchison, 2007; Lau et al., 2013). In a behavioral study, Hutchison (2007) probed for prediction effects by presenting color-and-verbal cues indicating the likelihood of encountering a related prime-target pair prior to each trial (green text stating ‘80% Related’ or red text stating ‘80% Unrelated’). They found greater priming effects for the highly-related cue than for the highly-unrelated cue trials, an effect limited to individuals with high attentional control (measured by a composite score comprising operation span, Stroop, and antisaccade measures). Lau et al. (2013) manipulated predictive validity by constructing separate blocks with few related pairs and many related pairs, presented in that order; they found greater N400 reduction for related prime-target pairs for the high-relatedness than for the low-relatedness block. Although some studies have also found anterior positivities argued to reflect unfulfilled predictions (Van Petten & Luka, 2012), this effect was not found for the targets in Lau et al. (2013). The current study further investigates the role of prediction and individual differences in word-pair semantic priming using color-and-verbal relatedness-proportion cues (80% Related; 20% Related), following Hutchison (2007), and a battery of individual difference measures. N=17 native English-speaking adults completed the ERP study and a set of tasks assessing aspects of attentional control (Counting Span working memory task and Stroop task) and phonemic/semantic fluency (FAS task). In the ERP study, participants read 480 prime-target pairs, and were asked to press a button when an animal word appeared. The stimuli included 160 targets and 320 fillers used to ensure that the 80% and 20% cues accurately represented the relatedness-proportion in the experiment. Each target was paired with one of four primes: related prime with ‘80% Related’ cue, related prime with ‘20% Related’ cue, unrelated prime with ‘80% Related’ cue and unrelated prime with ‘20% Related’ cue. Results show an overall effect of relatedness: related targets yielded smaller N400s than unrelated targets. This effect was modulated by relatedness-proportion, with a greater N400 reduction effect for the ‘80% Related’ than for the ‘20% Related’ condition. An anterior positivity also emerged for the unrelated targets in the high-relatedness condition. This positivity, which was significantly larger for unrelated targets in the ‘80% Related’ than in the ‘20% Related’ condition over left anterior sites (and marginally over right anterior sites), may reflect the cost of disconfirmed predictions within the highly-related condition. Finally, accuracy on the Stroop task was significantly correlated with the relatedness effect in anterior regions. These findings converge with Lau et al. (2013) and Hutchison (2007) in demonstrating the role of prediction in word-pair semantic priming.
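As an illustration of how an N400 relatedness effect per cue condition could be quantified, here is a hedged sketch on simulated single-channel epochs. The sampling rate, time window, trial counts, and condition names are assumptions for illustration, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
srate = 250                                      # Hz (assumed)
times = -0.1 + np.arange(int(1.0 * srate)) / srate
n400_window = (times >= 0.3) & (times <= 0.5)    # 300-500 ms post-target (assumed)

# Hypothetical single-channel epochs (trials x time) per condition.
conditions = ["related_80", "unrelated_80", "related_20", "unrelated_20"]
epochs = {c: rng.standard_normal((40, times.size)) for c in conditions}

# Relatedness effect = mean unrelated amplitude minus mean related amplitude,
# computed separately for the 80% Related and 20% Related cue conditions.
mean_amp = {c: epochs[c][:, n400_window].mean() for c in conditions}
effect_80 = mean_amp["unrelated_80"] - mean_amp["related_80"]
effect_20 = mean_amp["unrelated_20"] - mean_amp["related_20"]
print(f"relatedness effect: 80% cue = {effect_80:+.3f}, 20% cue = {effect_20:+.3f} (arbitrary units)")
```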

Monitoring of pitch and formant trajectories during speech in Parkinson’s disease

Fatemeh Mollaei1,2, Douglas M. Shiller1,3, Shari R. Baum1,2, Vincent L. Gracco1,2; 1Centre for Research on Brain, Language and Music, 2McGill University, 3Université de Montréal

The basal ganglia contribute to sensorimotor processing as well as higher order cognitive learning (Graybiel et al. 2005; Stocco et al. 2010). Parkinson’s disease (PD), a manifestation of basal ganglia dysfunction, is associated with a deficit in sensorimotor integration. We recently demonstrated differences in the degree of sensorimotor compensation and adaptation in response to auditory feedback alterations during speech in participants with PD compared to healthy controls (Mollaei et al., 2013; Mollaei et al, in preparation). Participants with PD were found to respond more robustly to auditory feedback manipulations of pitch (reflecting laryngeal changes) and less robustly to formant manipulations (reflecting changes in oral shape), suggesting that their sensorimotor systems are intrinsically sensitive to the feedback manipulations. One issue that has not been addressed is whether PD patients may be limited in their ability to detect these auditory feedback-induced errors while passively listening to or compensating for their altered speech. Here we combined a sensorimotor compensation paradigm with an auditory discrimination task to investigate error detection and correction mechanisms underlying the control of vocal pitch and formant parameters. PD and age-matched control participants produced speech while their auditory feedback (F0 and first formant frequency, or F1) was altered unexpectedly on random trials. After each trial, participants reported whether or not they detected the feedback perturbation. Participants also completed an auditory discrimination task using pre-recorded samples of their own speech with the same alterations applied. PD participants exhibited a larger compensatory response to F0 perturbations in pitch; however, they showed reduced compensation to F1 perturbations compared to age-matched controls. Furthermore, while detection accuracy for F1 did not differ between the two groups during on-line speech production, PD patients were found to be less sensitive to F1 errors during listening to pre-recorded speech. The results suggest that the sensory-based control of pitch and formant frequency may be differentially impaired in PD, due in part to differences in the capacity for auditory error detection in F0 and formant frequency.
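A small worked example of how a compensatory response to an F0 perturbation can be expressed as a percentage of the applied shift (in cents). The frequencies and shift size below are invented for illustration and are not the study's measurements.

```python
import numpy as np

def cents(f, ref):
    """Frequency f expressed relative to ref in cents (100 cents = 1 semitone)."""
    return 1200 * np.log2(f / ref)

baseline_f0 = 120.0   # Hz, speaker's habitual fundamental frequency (made up)
shift = +100.0        # cents, upward perturbation applied to the auditory feedback (made up)
produced_f0 = 118.0   # Hz, F0 actually produced on a perturbed trial (made up)

response = cents(produced_f0, baseline_f0)   # negative = speaker lowered pitch
compensation = -response / shift * 100       # percent of the perturbation opposed
print(f"produced change = {response:+.1f} cents, compensation = {compensation:.0f}% of the shift")
```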



Slide Session C

Friday, August 29, 8:30 - 9:50 am, Effectenbeurszaal

Combinatorial Processing: Syntax, Semantics, Pragmatics

Chair: Jeff Binder
Speakers: Edna Babbitt, Nicole E. Calma, Ece Kocagoncu, Cesar Lima

Improved Reading and Concurrent Increased BOLD Activation Following Intensive Aphasia Treatment

Edna Babbitt1,2,3, Xue Wang2, Todd Parrish2, Leora Cherney1,2; 1Rehabilitation Institute of Chicago, 2Feinberg School of Medicine, Northwestern University, 3University of Queensland

Intensive comprehensive aphasia programs (ICAP) provide up to 120 hours of therapy in four weeks, which differs from the standard outpatient model of therapy. Although research is beginning to explore behavioral changes in ICAP participants, little is known about concurrent neuroplastic changes that may occur. This poster highlights one participant who made significant behavioral language changes on a reading measure with corresponding increased BOLD activation on a semantic judgment fMRI task. Nine participants in a four-week ICAP agreed to take part in pre- and post-treatment fMRI scans. At the outset, one participant, SENMA, demonstrated remarkably decreased scores on the Western Aphasia Battery reading subtest as compared to his Aphasia Quotient (AQ) score (measure of comprehension and verbal expression). His initial WAB scores were: AQ=84.1 and reading=61.0. The other participants demonstrated a different pattern with reading scores an average of 9 points higher than the AQ scores. The participants performed a visual synonym task using a block design. There were 8 interleaved control and task blocks with a 40 second duration for each block. Pairs of words were presented. Participants were instructed to press the response button only when the words were synonymous (e.g. boat and ship). During the control period, pairs of fake words (letter strings) were presented. A response was required only when the letter strings were identical. The synonym task has been shown to activate Broca's and Wernicke's areas in normal volunteers. Participants performed the task prior to and at the end of the ICAP treatment. MR data were collected using a 3.0 T Siemens scanner. Structural images were collected using a T1-weighted 3-D MPRAGE; functional images were collected using routine BOLD EPI. Functional images were slice timing corrected, realigned, co-registered with the structural image, normalized to the MNI template and smoothed by a 6mm Gaussian Kernel. Contrast images (synonyms>letter strings) were compared between pre- and post-treatment sessions. Behaviorally, SENMA improved in his WAB AQ from 84.1 to 93.0, an 8.9 point improvement; reading improved from 61 to 100 (maximum score), a 39 point improvement. The average WAB AQ and reading improvement of the other participants was 7.7 and 8.7 respectively. SENMA showed increased BOLD activations post-treatment in the left inferior frontal gyrus and supplementary motor area. None of the other subjects showed changes in BOLD activation. Changes in SENMA's reading scores from pre- to post-treatment demonstrated clinically significant improvement as compared to other participants. It is likely that participants with higher WAB AQ scores may have retained relatively good reading skills. However, the isolated deficits in SENMA's reading skills were ameliorated by participation in the intensive aphasia treatment program, which focused on all modalities of language. In summary, smaller gains on reading measures may not be represented by neuroplastic changes with an fMRI semantic judgment task. Significant behavioral improvements may need to occur before those changes are represented with scanning tasks.

Familiarity effects on Language/Music P600 interactions

Nicole E. Calma1, Laura Staum-Casasanto2, Dan Finer1, Robbin Miranda3, Michael T. Ullman4, John E. Drury1; 1Stony Brook University, 2University of Chicago, 3Infinimetrics Corporation, 4Georgetown University

[BACKGROUND] Whether language/music involve shared neurocognitive mechanisms remains a topic of debate (Patel 2003, Peretz & Coltheart 2003). Consistent with overlap in underlying mechanisms, ERP interference studies have demonstrated interaction patterns involving anterior negativities (LAN/RAN effects) when linguistic/musical syntax are simultaneously disrupted (using out-of-key notes or chords; see e.g., Koelsch et al 2005). However, whether the mechanisms underlying other ERP components (e.g., N400/P600) may be shared across domains remains undetermined. Our previous work tested familiar and unfamiliar melodies (from Miranda & Ullman 2007) containing musical syntactic violations (out-of-key notes) with simultaneous presentation of sentences containing lexical/conceptual semantic violations (“…the ball John will KICK/#BAKE…”). Such violations can elicit N400 effects followed by posterior P600-like positivities. In that study, P600 effects elicited by simultaneous music-syntactic and linguistic-semantic violations were subadditive when the melody was familiar/known (consistent with shared/overlapping generators). In contrast, unfamiliar/novel melodies containing simultaneous music-syntactic and linguistic-semantic violations yielded additive P600 effects (consistent with distinct underlying generators). [PRESENT STUDY] Using the same set of familiar/unfamiliar melodies and correct target sentences, the present study used musical syntactic violations (out-of-key notes) with simultaneous presentation of sentences containing linguistic-syntactic violations (e.g., “…the ball John will KICK/*KICKED…”). Sentences were presented word-by-word visually, while melodies were heard over headphones. Our aim was to determine whether the same influence of melody familiarity on language/music P600 interactions would arise with linguistic-syntactic violations. [RESULTS] Strikingly, the opposite pattern emerged, showing language/music P600 interactions for unfamiliar melodies only. Specifically, simultaneous violations of music-/linguistic-syntax with familiar melodies produced additive P600s (consistent with distinct underlying generators), while simultaneous violations of syntax across domains with unfamiliar melodies produced subadditive ERP profiles (consistent with shared/overlapping generators). [DISCUSSION] At a minimum, these results demonstrate that (1) music-related P600s for familiar and unfamiliar melodies are distinct, (2) linguistic P600 effects for conceptual-semantic violations and morphosyntactic violations are distinct. We suggest the P600 interaction effects for familiar melodies and lexical/conceptual linguistic violations arise when both streams are simultaneously reliant on mechanisms subserving access/retrieval of information from long term memory. The corresponding P600 interactions for unfamiliar melodies and linguistic morphosyntactic violations index simultaneous demands on the processing of abstract structure only. Independent of our interpretation of these data, however, the present findings demonstrate an important role for melody familiarity in understanding the relationships between the neurocognitive systems underlying language and music, and also demonstrate the utility of interference paradigms to distinguish superficially similar ERP response profiles ("semantic" and "syntactic" linguistic P600s) within domains.
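The additive-versus-subadditive logic behind these P600 comparisons can be made concrete with a toy calculation. The amplitudes below are invented, and this is only a sketch of the comparison, not the study's statistical analysis.

```python
# If the generators are distinct, the double-violation effect should roughly equal the
# sum of the single-violation effects; a smaller observed value is subadditive.
control, music_only, lang_only, double = -0.2, 1.1, 1.4, 1.8   # mean P600-window amplitudes, uV (made up)

music_effect = music_only - control
lang_effect = lang_only - control
double_effect = double - control

interaction = double_effect - (music_effect + lang_effect)
print(f"predicted additive effect = {music_effect + lang_effect:.1f} uV")
print(f"observed double-violation effect = {double_effect:.1f} uV")
print(f"interaction term = {interaction:+.1f} uV ({'subadditive' if interaction < 0 else 'additive'})")
```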

From sound to meaning: Neural dynamics of lexical access to conceptual representations

Ece Kocagoncu1, Alex Clarke2, Barry Devereux1, Elisa Carrus1, Lorraine K. Tyler1; 1Centre for Speech, Language and the Brain, University of Cambridge, Cambridge, UK, 2Centre for Neuroscience, University of California, Davis, CA USA

How do we access meaning through speech? Understanding the meaning of a concept requires co-activation of the concept’s features within a distributed semantic network. The distributed cohort model (DCM, Marslen-Wilson, 1987) of speech comprehension proposes that candidate lexical representations are activated in parallel as the speech unfolds. Parallel activation of candidate representations creates transient competition until the point in the spoken word where the word is uniquely identified (uniqueness point, UP). The model predicts that following the UP the partial activation of the target word’s representation is boosted and conceptual representations are accessed. Here we test this model by looking at how form-based representations activated by speech evolve into semantic representations following phonological and semantic competition. We adopt a distributed feature-based model of semantics, the Conceptual Structure Account (CSA; Tyler & Moss, 2001), and the DCM. We (1) investigate the spatiotemporal dynamics of phonological and semantic competition as the speech unfolds; (2) ask whether the UP marks a transition between competition and the activation of the target word’s semantic representation; and (3) ask whether the target word’s semantic representation will prime its neighbours through spreading activation. We collected magnetoencephalography (MEG) data while fourteen participants listened to spoken words and performed a lexical decision task. Each of the 296 spoken words denoted a concrete concept (e.g. hammer, donkey). To define and segregate distinct spatiotemporal signatures associated with key cognitive processes that take place during spoken language comprehension, an innovative multivariate pattern analysis method called spatiotemporal searchlight representational similarity analysis (ssRSA) was performed (Su, Fonteneau, Marslen-Wilson, & Kriegeskorte, 2012). ssRSA uncovers the representational geometry of specific oscillatory MEG signatures diffused over both cortical networks and time, and relates them to the representational geometry of theoretical models of cognition. Using ssRSA we tested four theoretical models that captured cohort competition, semantic competition, access to unique conceptual representations and shared category-level features. The ssRSA revealed early parallel activity in the L inferior frontal gyrus (LIFG) for models of phonological and semantic competition prior to the UP, supporting the view that the LIFG resolves both phonological and semantic competition by selecting the target representation among competing alternatives (Moss et al., 2005; Novick et al., 2005). Resolution of both types of competition involved co-activation of the LIFG, additionally with the L supramarginal and L superior temporal gyri for the phonological competition model, and with the L angular gyrus (LAG) for the semantic competition model. After the UP we found rapid access to unique conceptual features involving the LAG and R inferior frontal gyrus. Overall, results show that when conceptual representations are accessed through speech, concepts that match the auditory input will initially be partially activated. As soon as the pool of candidate concepts is narrowed down to a single concept, the unique conceptual features of that concept alone are rapidly accessed.
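A rough sketch of the time-resolved core of an RSA of this kind (the full ssRSA of Su et al., 2012, additionally searches over cortical space). The MEG patterns and the model matrix below are random placeholders, and all names and dimensions are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_words, n_sensors, n_times = 296, 64, 100

# Hypothetical word x sensor x time MEG patterns and a placeholder model RDM
# (e.g., a cohort-competition model) in condensed form.
meg = rng.standard_normal((n_words, n_sensors, n_times))
model_rdm = rng.random(n_words * (n_words - 1) // 2)

# Correlate the neural pattern geometry with the model at each time point.
fit = np.empty(n_times)
for t in range(n_times):
    brain_rdm = pdist(meg[:, :, t], metric="correlation")
    fit[t], _ = spearmanr(brain_rdm, model_rdm)

print("peak model fit at time index", int(fit.argmax()), f"(rho = {fit.max():+.3f})")
```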

Feel the noise: Individual differences in perceived vividness of auditory imagery are reflected in human brain structure

Cesar Lima1, Nadine Lavan2, Samuel Evans1, Zarinah Agnew3, Andrea Halpern4, Pradheep Shanmugalingam1, Sophie Meekings1, Dana Boebinger1, Markus Ostarek1, Carolyn McGettigan2, Jane Warren5, Sophie Scott1; 1Institute of Cognitive Neuroscience, University College London, 2Department of Psychology, Royal Holloway University of London, 3Department of Otolaryngology, University of California, 4Department of Psychology, Bucknell University, 5Faculty of Brain Sciences, University College London

Imagine the voice of a friend when you laugh together, or a piano playing a familiar song. We can generate mental auditory images of songs or voices, sometimes perceiving them almost as vividly as actual perceptual experiences. Although the functional networks supporting auditory imagery have been described in previous studies, less is known about the systems that predict inter-individual differences in auditory imagery. Combining voxel-based morphometry (VBM) and fMRI approaches, in this study we examined the structural basis of inter-individual differences in how auditory images are subjectively perceived, and explored links between auditory imagery, sensory-based auditory processing, and imagery in the visual domain. We found that higher vividness of auditory imagery correlated with increased grey matter volume in the supplementary motor area (SMA), parietal cortex, medial superior frontal gyrus, and middle frontal gyrus (VBM study, N = 74). An analysis of functional responses during the processing of different types of human vocalizations (fMRI study, N = 56) revealed that the SMA and parietal sites that predict imagery are also modulated by sound type. Using a multivariate representational similarity analysis, we found that higher representational specificity of heard sounds in SMA predicts higher perceived vividness of auditory imagery, indicating a mechanistic link between sensory- and imagery-based processing in sensorimotor cortex. In a follow-up VBM study (N = 46), vividness of imagery in the visual domain also correlated with SMA structure, and with auditory imagery scores. Altogether, these findings provide evidence for a signature of imagery in brain structure. They additionally highlight a common role of perceptual-motor interactions for processing heard and internally generated auditory information.
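As a schematic of the kind of voxelwise brain-behaviour association that VBM estimates, here is a simple correlation sketch. The real analysis would use SPM-style general linear models with covariates; the arrays, voxel count, and variable names below are random stand-ins.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n_subjects, n_voxels = 74, 2000

grey_matter = rng.random((n_subjects, n_voxels))   # stand-in grey matter volume maps
vividness = rng.random(n_subjects)                 # stand-in auditory imagery vividness scores

# Correlate vividness with grey matter volume at every voxel.
r = np.array([pearsonr(grey_matter[:, v], vividness)[0] for v in range(n_voxels)])
print(f"strongest positive association: voxel {int(r.argmax())}, r = {r.max():.2f}")
```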



Slide Session D

Friday, August 29, 1:00 – 2:20 pm, Effectenbeurszaal

Lexical Processing and Cognitive Control

Chair: Fred Dick
Speakers: Zude Zhu, Sanne ten Oever, Laura Hedlund, Drew Trotter

Age-related semantic prediction reduction was associated with smaller brain activation change

Zude Zhu1, Shiwen Feng1; 1Jiangsu Normal University

During sentence comprehension, older adults are less likely than younger adults to predict upcoming words based on the given sentence context. However, it remains unclear how this change in prediction relates to the aging of brain function. In the present study, 41 healthy native Chinese speakers (23-70 years old) comprehended low cloze (LC) and high cloze (HC) sentences during fMRI scanning. While there were no significant age-related behavioral changes, after controlling for education and sex, an age-related reduction of the semantic prediction effect (LC - HC) was found in regions including the left middle frontal gyrus, left supramarginal gyrus, bilateral temporal-occipital cortex and supplementary motor cortex. It was further shown that smaller prediction-related activation change in the anterior portion of the left middle temporal gyrus was associated with better categorical fluency, after controlling for age, education and sex. Moreover, RT interference in the Stroop task was negatively associated with the prediction effect in the posterior portion of the left middle frontal gyrus, right middle temporal gyrus and right visual cortex. Together, the results suggest that semantic prediction is correlated with age and with changes in cognitive control, and are in line with the notion that language comprehension mechanisms are integrated with language production mechanisms.
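A minimal sketch of an age-by-prediction association controlling for education and sex via residualised regression. The data are simulated and this illustrates the general approach of covariate-controlled correlation, not the study's actual pipeline or variable names.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 41
age = rng.uniform(23, 70, n)
education = rng.uniform(9, 20, n)
sex = rng.integers(0, 2, n).astype(float)
prediction_effect = -0.01 * age + rng.normal(0, 0.3, n)   # stand-in LC-minus-HC activation change

covariates = np.column_stack([np.ones(n), education, sex])

def residualise(y, X):
    """Return y with the part explained by X removed (ordinary least squares)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r = np.corrcoef(residualise(age, covariates),
                residualise(prediction_effect, covariates))[0, 1]
print(f"partial correlation of age and prediction effect, controlling education and sex: {r:+.2f}")
```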

Theta phase sensitization as a flexible neural mechanism for optimized syllable identification

Sanne ten Oever1, Alexander Sack1; 1Maastricht University

In spoken language, visual mouth movements naturally precede the production of any speech sound and therefore serve as a temporal prediction and detection cue for identifying spoken language. It has been proposed that at the onset of visual mouth movements ongoing theta oscillations in auditory cortex align, providing the temporal reference frame for the auditory processing of subsequent speech sounds. As different syllables (e.g. /da/ and /ga/) are characterized by different visual-to-auditory temporal asynchronies, auditory signals should consequently be assigned to different phases of the aligned theta oscillation. This results in a consistent relation between syllable identity and theta phase. In the current study we tested whether this “phase-syllable identity” relation causes theta phase to systematically bias syllable perception when being confronted with ambiguous auditory stimuli in the absence of visual cues. To this end, we recorded EEG while presenting ambiguous auditory /daga/ syllables to investigate whether ongoing theta oscillation phase prior to stimulus onset biases syllable identification. In a second experiment, we externally entrained theta oscillations via rhythmic auditory stimulation - thereby controlling at which exact theta phase the ambiguous /daga/ stimulus was presented. In both experiments we revealed that participants perceive /da/ or /ga/ dependent on the underlying theta oscillation phase, establishing the functional relationship between pre-stimulus theta phase and syllable identification.
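The phase-binning logic behind this kind of analysis can be illustrated with simulated trials. The phases and reports below are generated from an assumed cosine dependence rather than recorded EEG, and all trial counts and bin choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials = 400

# Hypothetical theta phase at /daga/ onset and simulated reports: /da/ is assumed more
# likely near one phase and /ga/ near the opposite phase, as the account predicts.
theta_phase = rng.uniform(-np.pi, np.pi, n_trials)
p_da = 0.5 + 0.3 * np.cos(theta_phase)
report_da = rng.random(n_trials) < p_da

# Bin trials by phase and compute the proportion of /da/ reports per bin.
bin_edges = np.linspace(-np.pi, np.pi, 9)
for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
    in_bin = (theta_phase >= lo) & (theta_phase < hi)
    print(f"phase [{lo:+.2f}, {hi:+.2f}): P(/da/ report) = {report_da[in_bin].mean():.2f}")
```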

Parsing in the monolingual and bilingual brain: ERP evidence of automatic simultaneous access to morphosyntactic information in L1 and L2

Laura Hedlund1, Alina Leminen1,2, Lilli Kimppa1, Teija Kujala1, Yury Shtyrov2,3; 1Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland, 2Center of Functionally Integrative Neuroscience, Aarhus University, Denmark, 3Centre for Cognition and Decision Making, Higher School of Economics, Moscow, Russia

In today’s world, multilingualism is the norm rather than the exception. The human ability to understand and speak more than one language has therefore become an important topic of investigation in cognitive neuroscience. A key issue in mastering any language is acquiring grammatical and morphosyntactic rules in order to understand and produce discourse correctly. Previous studies suggest that native speakers possess automatic access to memory traces of morphosyntactic elements (Shtyrov et al., 2003, JOCN; Bakker et al., 2013, NeuroImage; Leminen et al., 2013, Cortex), i.e. morphosyntactic structures are rapidly processed in the human neocortex even without listeners’ focused attention. It remains unknown, however, whether automatic neural morphosyntactic mechanisms work in a similar way in native speakers and highly proficient bilinguals. This was investigated here in (1) a group of sequential Finnish-English bilinguals (L1 Finnish speakers who started learning English before the age of nine, and use it in their daily lives), and (2) monolingual speakers of English. The two adult groups (aged 18-40) were presented with an acoustically balanced set of Finnish and English words consisting of (1) real inflected words with the plural suffixes ‘–s’ (English) and ‘–t’ (Finnish), e.g., cakes, kanat (‘chickens’); (2) novel complex words consisting of real stems combined with suffixes from the opposite language (e.g., cake-*t, kana-*s); and (3) as a control, novel complex words consisting of phonologically similar pseudoword stems and real suffixes of both languages (*pake-t, *pana-s, *pake-s, *pana-t). We recorded high-resolution EEG in a passive listening paradigm. The ERP pattern in the monolingual group showed a stronger activation for real English inflected words with the corresponding inflectional suffix (e.g., cakes) than for Finnish inflected words. This corroborates earlier findings with similar experimental paradigms and suggests automatic access to lexical and grammatical units in L1. Similar, although smaller, responses were found in monolinguals for English pseudo-stems with the plural marker ‘-s’, e.g., *pakes. Crucially, this pattern was not found in monolinguals for the Finnish plural suffix attached to English stems. This suggests that inflectional parsing takes place for both real words and unknown complex pseudowords (although to a different degree) as long as they contain suffixes that conform to the native language’s morphological patterns. In contrast, bilingual speakers exhibited an unbiased ERP response towards complex words in both languages, indicating a similar skill level of morphological parsing in both L1 and L2. These responses were characterized by similar ERP amplitudes and response patterns across the different morphosyntactic conditions across languages. These results suggest that bilingual learners are capable of automatically accessing morphosyntactic information in both languages interchangeably and exhibit an overlap of morphological processing strategies, whether in L1 or L2. These results are in line with a view of bilingual grammatical processing whereby children with later exposure to L2 input (even up to the age of 9) may process grammatical information like native speakers (e.g., Hernandez et al., 2005, TrendsCog.Sci.).

Vikings who can gulp down beer mugs, cook bean cans, and slurp wine glasses: An ERP study of ambiguous heads in complex Icelandic words

Drew Trotter1, Karthik Durvasula1, þórhalla Guðmundsdóttir Beck2, Matthew Whelpton2, Joan Maling3, Alan Beretta1; 1Michigan State University, 2University of Iceland, 3Brandeis University

Semantic and syntactic heads in transparent noun-noun compounds are always aligned (Scalise & Guevara, 2006). Icelandic seems to provide a striking exception: It is possible to say, ‘Ég þynnti þrjá kaffibolla’ (‘I diluted three coffee-cups’), where the numeral agrees in number and gender with the syntactic head ‘cups’, but ‘coffee’ is the semantic head (it is what I drank). Also, in Icelandic, it is possible to ‘break three coffee-cups’; here, the semantic and syntactic heads are aligned in the normal way at ‘cups’. Thus, compounds with container nouns (cup, can, bowl, etc.) are ambiguous depending on the choice of verb. Harley (2015) proposes two structures: (i) where a coffee-cup is broken, ‘cup’ is both the syntactic and semantic head of a noun-phrase, and ‘coffee’ is a modifier (‘Aligned-Head’); (ii) where a coffee-cup is diluted, ‘cup’ is the head of a measure-phrase, and ‘coffee’ is the head of a noun-phrase (‘Split-Head’). Since heads would have to be processed separately to distinguish the two interpretations of ‘coffee-cup’, this contrast speaks directly to the issue of how complex words are accessed, requiring that they be decomposed into component parts. We conducted two ERP studies using RSVP to examine the processing of split-heads compared to a baseline of aligned-heads. Experiment 1: 22 native speakers of Icelandic read 37 sentence-pairs involving either split- or aligned-heads. ERPs time-locked to the onset of the compound were analyzed using repeated-measures ANOVA with Condition (split, aligned) and ROI (Anteriority, 2 levels; laterality, 2 levels) as factors. Results revealed a significantly greater anterior-left positivity for split-heads during the 450-600ms time-window. We interpret this as a cost of processing the more complex head-structure inherent in split-heads. However, it could be due merely to the semantic implausibility of C1 as the head in aligned-heads (‘coffee’ cannot be ‘broken’) (Staub et al. 2007). Hence, Experiment 2: 20 (new) Icelandic subjects took part. All stimuli were identical to Experiment 1, except that C1/C2 were presented separately. Thus, the only difference at C1 was the semantic anomaly in aligned-heads. No ERP effect was found at C1, suggesting that the ERP finding in Experiment 1 was indeed due to the processing of a complex-head structure. Because in all compounds the first constituent (C1) could not stand alone (it mismatched a preceding numeral in gender or number), a second constituent (C2) was entirely predictable in both conditions. The prediction would be for an aligned-head since that is typical. At C2, we found an early anterior-left negativity (125-225ms) for split-heads, which likely reflects surprise at encountering a measure-phrase. This is followed by a posterior-right positivity (275-350ms) that may constitute a P3a, associated with evaluating the consequences of surprise, i.e., with reanalysis from an expected aligned-head to a split-head structure. More generally, we conclude that split-head compounds are decomposed into noun-phrase and measure-phrase heads. More significantly, aligned-head compounds must also be decomposed into modifier and head to render them distinguishable from split-head compounds, supporting decomposition models of complex-word processing (Taft, 2004; Fiorentino & Poeppel, 2007).