Slide Sessions

Session A: Network Development and Reorganization
Thursday, October 15, 1:00 - 2:20 pm, Grand Ballroom

Session B: Perspectives on Language Processing
Friday, October 16, 3:00 - 4:20 pm, Grand Ballroom

Session C: Outside the Left Peri-Sylvian Cortex
Saturday, October 17, 8:30 - 9:50 am, Grand Ballroom



Slide Session A

Thursday, October 15, 1:00 - 2:20 pm, Grand Ballroom

Network Development and Reorganization

Chair: Gina Kuperberg, Tufts University
Speakers: Ekaterini Klepousniotou, Carla Fernandez, Camila Zugarramurdi, I-Fan Su

Processing enhancement for conventional metaphors following stimulation of Broca’s area

Ekaterini Klepousniotou1, Eleanor Boardman1, Alison Allsopp1, Daniel J Martindale1; 1University of Leeds

The left hemisphere of the brain is specialised and dominant for language comprehension and production, and patients with left hemisphere damage often display profound language disruption. In contrast, following right hemisphere damage, disruption to language is less perceptible. Current research acknowledges a critical role for the right hemisphere in processing inferred or implied information by maintaining relevant facts and/or suppressing irrelevant ones, but the exact role of the right hemisphere and its coordination with the left is still under investigation. The present study investigated the role of Broca’s area in the left hemisphere and its homologue in the right hemisphere in the processing of metaphorical language by studying the processing abilities of individuals with depressed or enhanced unilateral brain function produced by transcranial direct current stimulation (tDCS). The study employed an auditory sentence priming paradigm using both novel and conventional metaphors as well as literal sentences, and young healthy participants (N=20) were asked to make semantic judgements. Anodal, cathodal or sham stimulation was applied via electrodes at F7/F8, sites overlying Broca’s area and its right-hemisphere homologue respectively. Significantly enhanced processing of literal meanings and conventional metaphors only was observed after anodal (i.e., excitatory) stimulation of Broca’s area in the left hemisphere, while anodal stimulation of Broca’s homologue in the right hemisphere enhanced accuracy across all conditions. The findings are in line with the Fine/Coarse Semantic Coding Hypothesis and corroborate previous research in underlining the distinct roles the left and right hemispheres play in processing literal and non-literal language respectively.
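The key accuracy contrast lends itself to a simple illustration. Below is a minimal, hypothetical sketch of one such comparison (anodal vs. sham stimulation for conventional metaphors) on simulated proportion-correct data; the effect size and all values are invented for illustration, not taken from the study.

```python
# Hypothetical sketch: paired comparison of semantic-judgement accuracy under
# anodal vs. sham stimulation for conventional metaphors. Data are simulated.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(9)
n = 20                                     # participants, as in the study
sham = rng.uniform(0.7, 0.9, n)            # simulated proportion correct
anodal = np.clip(sham + rng.normal(0.05, 0.05, n), 0, 1)  # assumed enhancement

t, p = ttest_rel(anodal, sham)
print(f"anodal vs. sham (conventional metaphors): t={t:.2f}, p={p:.3f}")
```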

Cost modulation by accented speech in code-switching processing: an ERP and neural oscillations study

Carla Fernandez


Using brain rhythms to improve behavioral predictors of reading

Camila Zugarramurdi1,2, Marie Lallier1, Juan C. Valle-Lisboa2, Manuel Carreiras1; 1Basque Center on Cognition Brain and Language (BCBL), 2Facultad de Psicologia, Universidad de la Republica

Predicting reading development is a crucial step towards designing timely interventions to prevent life-long consequences of reading difficulties. In the current literature, there is general agreement on three behavioral predictors of reading development, irrespective of the language of study: phonological awareness, letter knowledge and rapid automatized naming (RAN). However, these measures combined account for up to 60 percent of the variance and have a false alarm rate of at least 10 percent, which potentially results in superfluous interventions that are undesirable in terms of both human and economic resources. Although in recent years new theories of the underlying mechanisms of reading difficulties have been put forward, these have not made their way into the behavioral assessment of reading development. The main claim of these theories is that the precision of the entrainment of oscillatory neural activity to external rhythmic stimuli, such as speech or words, underlies the distinctiveness of phonological representations at the auditory level, and the precise shifting of attention necessary for reading at the visual level. In the present study we aimed to improve the predictive validity of behavioral measures by including novel tasks that evaluate the precision of synchronization of auditory and visual oscillatory activity. The behavioral assessment included: phonological awareness, letter knowledge, RAN, tapping to a beat, dichotic listening, visual entrainment, verbal and non-verbal short-term memory, receptive vocabulary, IQ and reading (decoding). The sample was composed of ~700 Spanish-speaking 5-year-old prereaders attending kindergarten, who were assessed at their schools in 3 sessions distributed over two successive weeks; the overall data collection was completed over a 2-month period. In order to accomplish such a large-scale assessment in a brief time course, a digital screening tool implemented on tablets in a game-like manner was developed. This data collection constitutes the first phase of a longitudinal study to be completed by the end of 2017. The results at this phase suggest that the precision of entrainment to external rhythmic stimuli, measured behaviorally, can explain some of the variance found in phonological awareness tasks, underscoring its role in the specification of phonological representations and in the rapid allocation of attention that underlies reading performance. Furthermore, the results show that neural entrainment can be indirectly measured through behavioral tasks easily implemented in an educational environment, which can lead to improved prediction of early reading difficulties.
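As a concrete illustration of the screening logic described above, here is a minimal, hypothetical sketch: simulated data stand in for the behavioral predictors, a logistic model flags at-risk children, and the false-alarm rate (typically developing children incorrectly flagged) is computed. All variable names, weights and data are invented assumptions, not the study’s materials.

```python
# Hypothetical sketch: combining behavioral predictors of reading into a
# screening model and computing its false-alarm rate. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 700  # sample size comparable to the study

# Simulated z-scored predictors: phonological awareness, letter knowledge,
# RAN, and a behavioral entrainment (tapping-to-a-beat) score.
X = rng.standard_normal((n, 4))
# Simulated risk status loosely driven by the predictors (assumption).
risk = (X @ np.array([-0.8, -0.6, 0.7, -0.5]) + rng.standard_normal(n)) > 1.0

model = LogisticRegression().fit(X, risk)
flagged = model.predict(X).astype(bool)

# False-alarm rate: typically developing children incorrectly flagged at risk.
fa_rate = np.mean(flagged[~risk])
print(f"false-alarm rate: {fa_rate:.2%}")
```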

Morphological Processing in Chinese-Speaking Children: An Event-Related Potential Study

I-Fan Su1, Anna Petrova1, Wei Yan Renee Fung1, Kai Yan Dustin Lau2, Sam Po Law1, Hei Ling Ho1; 1The University of Hong Kong, 2The Hong Kong Polytechnic University

Morphological awareness has been suggested to play a significant role in reading development and to support various skills such as vocabulary and text comprehension. However, evidence on how morphological information is processed in Chinese is limited and has been constrained to behavioural tasks in children with reading difficulties (e.g. Chen et al., 2009; Tong et al., 2009). Using the event-related potential (ERP) technique, this study examined whether typically developing children are sensitive to morphemes during lexical-semantic processing. Children were administered a standardized reading test (Hong Kong Graded Character Naming Test; Leung, Lai & Kwan, 2008) and asked to decide whether a target character corresponded to a syllable in the preceding spoken word in a homophone verification task. The auditorily presented word and target character pairs varied factorially in congruency (match vs. mismatch) and orthographic similarity (orthographically similar vs. dissimilar to the target character). Significant orthographic effects were found behaviorally, whereby children responded faster and more accurately to characters paired with visually dissimilar targets, suggesting that they needed more time and were more error-prone when rejecting the target character amongst visually similar competitors. Electrophysiological results at the N400 component showed a left lateralization effect, and characters paired with mismatched morphemes elicited a more negative N400 than matched morphemes. Character reading ability was also positively correlated with the amplitude of the N400 morpheme congruency effect in the central region. Importantly, these results show that the N400 component is sensitive to morphological processing in children, with more effortful N400 activation required to inhibit unrelated target morphemes during lexical-semantic retrieval. Moreover, the findings suggest the potential of applying the homophone verification paradigm, given its additional sensitivity to reading ability, to study morphological processing in less skilled and poor readers.
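A minimal sketch of the core ERP analysis may help: mean amplitude in an assumed 300-500 ms N400 window is compared between matched and mismatched morphemes, and the congruency effect is correlated with reading scores. The window, sampling rate and all data below are simulated assumptions, not the study’s actual parameters.

```python
# Hypothetical sketch of the N400 congruency analysis: mean ERP amplitude in
# an assumed N400 window for matched vs. mismatched morphemes, and its
# correlation with character-reading scores. All arrays are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_children, n_times = 24, 500                # 500 samples per epoch (assumption)
times = np.linspace(-0.2, 0.8, n_times)
n400_win = (times >= 0.3) & (times <= 0.5)   # assumed 300-500 ms window

# Simulated central-channel ERPs (child x time) per condition, in microvolts.
erp_match = rng.standard_normal((n_children, n_times))
erp_mismatch = erp_match - 1.5 * n400_win    # mismatched morphemes more negative

# Congruency effect: mean amplitude difference in the N400 window.
effect = (erp_mismatch[:, n400_win] - erp_match[:, n400_win]).mean(axis=1)

reading_score = rng.normal(50, 10, n_children)  # simulated reading test scores
r, p = pearsonr(reading_score, effect)
print(f"N400 congruency effect vs. reading: r={r:.2f}, p={p:.3f}")
```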



Slide Session B

Friday, October 16, 3:00 - 4:20 pm, Grand Ballroom

Perspectives on Language Processing

Chair: Liina Pylkkänen, New York University
Speakers: Nadine Lavan, Jeremy I Skipper, Sophia van Hees, Anna Fiona Weiss

Speaker identity in non-verbal signals – variability impairs generalization

Nadine Lavan1, Lucia Garrido2, Carolyn McGettigan1,3; 1Royal Holloway, University of London, 2Brunel University, London, 3University College London

Introduction: Voices are uniquely flexible signals that convey a wealth of information about a speaker’s identity (Belin et al., 2004; Mathias & von Kriegstein, 2014). Most research on the extraction of identity-related information from voices has used speech signals produced in a neutral voice. We thus know little about how natural flexibility in vocal signals affects identity perception. The extraction of speaker information is thought to rely on prototypical voice representations (Kreiman & Sidtis, 2011; Latinus et al., 2013). According to this model, unfamiliar voices are compared to prototypical representations based on population averages, while familiar voices are matched to representations of the specific speaker’s vocal inventory. These proposed mechanisms predict differences in voice perception between unfamiliar and familiar listeners. In a behavioural study and an fMRI study, we explored how identity-related information in voices is processed by familiar and unfamiliar listeners. Going beyond speech signals, we explored how vocal flexibility and variability, introduced by different types of non-verbal vocalizations (laughter, vowels) and levels of volitional control during production (volitional vs. spontaneous laughter), affect identity processing. Methods and Results: Behavioural study: 23 familiar and 23 unfamiliar listeners performed a speaker discrimination task. Participants heard permutations of pairs of volitional laughter, spontaneous laughter and vowels produced by 6 speakers. This yielded 6 conditions: 4 within-vocalization conditions (Vowels-Vowels, Volitional Laughter-Volitional Laughter, Spontaneous Laughter-Spontaneous Laughter, Volitional Laughter-Spontaneous Laughter) and 2 across-vocalization conditions (Volitional Laughter-Vowels, Spontaneous Laughter-Vowels). Participants were significantly better at matching the speakers in within-vocalization pairings (e.g. Vowels-Vowels) than in across-vocalization pairings (e.g. Volitional Laughter-Vowels). Familiar listeners performed significantly better than unfamiliar listeners in each condition. Both groups were, however, equally affected by flexibility in vocal signals: there was an apparent failure to generalize identity information across different non-verbal vocalization types. fMRI study: 19 familiar and 20 unfamiliar listeners were presented with spontaneous laughter, volitional laughter, series of vowels and brief sentences produced by 6 speakers while performing a one-back speaker discrimination task. Univariate analyses showed a main effect of vocalization type in a widespread network including bilateral STG, IFG and superior medial gyri, mapping differences in acoustic properties and meaning. A main effect of speaker was found in bilateral STG, reflecting acoustic differences. A main effect of group was found in right superior medial gyrus, with activation being higher for unfamiliar listeners, indicating more demanding computations during voice perception (von Kriegstein & Giraud, 2004). Representational similarity analysis (RSA) was used to explore whether regions coding for vocalization type and speaker identity could be identified. In line with the univariate results, these analyses reveal vocalization-type-based coding in bilateral STG. Ongoing analyses attempt to identify regions that code for speaker identity and to explore potential group differences within those regions.
Conclusion: Our findings illustrate that our ability to generalize speaker identity across different kinds of vocal signals is limited, even when dealing with familiar voices. In line with the behavioural evidence, identifying the neural underpinnings of ‘categorical’ speaker identity for familiar voices may be challenging in the context of highly variable vocal signals.
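For readers unfamiliar with RSA, a minimal sketch of the logic follows: a neural representational dissimilarity matrix (RDM) from a region of interest is compared against a model RDM coding vocalization type. Dimensions and data below are simulated; only the 4-vocalization-by-6-speaker design is taken from the abstract.

```python
# Hypothetical sketch of representational similarity analysis (RSA):
# correlate a model RDM coding vocalization type with a neural RDM from a
# candidate region. Activity patterns are simulated placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_items, n_voxels = 24, 200
# Condition labels: 4 vocalization types x 6 speakers (as in the design).
voc_type = np.repeat(np.arange(4), 6)

# Simulated item-wise activity patterns from, e.g., an STG ROI.
patterns = rng.standard_normal((n_items, n_voxels))

# Neural RDM: 1 - Pearson correlation between item patterns (condensed form).
neural_rdm = pdist(patterns, metric="correlation")
# Model RDM: 0 if same vocalization type, 1 otherwise.
model_rdm = pdist(voc_type[:, None], metric="hamming") > 0

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RDM correlation: rho={rho:.2f}")
```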

Jeremy I Skipper

Training-induced changes in the neural mechanisms underlying visual word recognition

Sophia van Hees1,2, Penny Pexman1,2, Bijan Mohammed1, Sage Brown1, Andrea Protzner1,2; 1University of Calgary, 2Hotchkiss Brain Institute

Introduction: Previous studies suggest that the efficiency of the visual word recognition system can be improved with training. The current study examined the underlying mechanisms responsible for such enhancements in visual word processing. To this end, we recorded EEG before and after intensive short-term training on a visual lexical decision task (LDT). We additionally investigated whether changes in processing transferred from the trained task condition to an untrained task condition. Methods: 20 healthy young adults (20-28 yrs; 11 males) completed approximately 16 hours of training over 7-10 days. The training employed a visual LDT with word and nonword stimuli presented horizontally. Before and after training, the participants completed the LDT during EEG recording, with stimuli presented both horizontally and vertically. Behaviourally, we analysed training-induced changes in reaction times in the trained (horizontal word processing) and near-transfer (vertical word processing) task conditions, using repeated-measures ANOVAs. To examine neural changes, we performed Partial Least Squares (PLS) analysis on the ERP data for both the trained and near-transfer task conditions. Results: Behaviourally, participants were significantly faster at correctly responding to both horizontal and vertical words following training. Analysis of the ERP waveforms revealed greater negative amplitudes in the N170 component following training, as well as reduced positive amplitudes in the P600 component. These amplitude changes were identified at bilateral occipital-parietal electrodes, for both horizontal and vertical words. Discussion: The results suggest that LDT training improved the efficiency of the visual word recognition system, for both the trained task condition (horizontal word processing) and the near-transfer task condition (vertical word processing). Greater amplitudes in the N170 component suggest that LDT training improved visual processing of letter strings, in line with previous studies examining reading skill. In contrast, decreased amplitudes in the P600 component suggest reduced access to stored representations of words. Taken together, the results suggest that following training participants relied more on perceptual processing of the stimuli and less upon the language network. Furthermore, this mechanism was also engaged when stimuli were presented in the untrained vertical condition, providing some evidence of near-transfer.
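A compact sketch of the task-PLS logic mentioned above: singular value decomposition of the condition-mean-centered data matrix yields latent variables linking experimental conditions to electrode/time patterns. The sketch uses simulated data and assumed dimensions (64 electrodes, 300 time points); it shows the general technique, not the authors’ exact pipeline.

```python
# Hypothetical sketch of a task PLS analysis on ERP data: SVD of the
# condition-mean-centered subject-average matrix. All data are simulated.
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_cond, n_features = 20, 2, 64 * 300   # electrodes x time points (assumed)

# Simulated subject-by-feature data for each condition (e.g. pre/post training).
data = rng.standard_normal((n_cond, n_subj, n_features))

# Condition means, centered across conditions.
cond_means = data.mean(axis=1)                 # (n_cond, n_features)
centered = cond_means - cond_means.mean(axis=0)

# SVD: U holds condition contrasts, Vt holds electrode/time saliences.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(f"LV1 explains {explained[0]:.1%} of the condition-related variance")
```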

What does it mean to regress? The neural basis of regressive eye movements during reading as revealed by concurrent fMRI/eye-tracking measures

Anna Fiona Weiss1, Franziska Kretzschmar2, Arne Nagels1,3, Matthias Schlesewsky4, Ina Bornkessel-Schlesewsky4; 1Department of Germanic Linguistics, University of Marburg, Marburg, Germany, 2Department of English and Linguistics, Johannes Gutenberg University of Mainz, Mainz, Germany, 3Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany, 4School of Psychology, Social Work & Social Policy, University of South Australia, Adelaide, Australia

Comprehension difficulty in sentence reading has been linked to two behavioral parameters: increased fixation durations and a higher probability of regressive saccades. Although inter-word regressions occur frequently, their function has not been clearly determined. While some have argued for the combination of visuo-motor control and attentional re-orientation with linguistic processing, others have posited direct linguistic control over the execution of regressions (Mitchell et al. 2008; Frazier & Rayner 1982). Yet behavioral measures cannot unequivocally distinguish between these competing explanations, and with only one study reporting the modulation of neuronal correlates by fixation durations (Henderson et al. 2015), the functional significance and neural basis of regressive saccades in reading remain largely unknown. We conducted a combined eye-tracking/fMRI study that investigated the neural underpinnings of regressions and the extent to which progressive and regressive saccades modulate brain activation in the reading network. We were particularly interested in brain regions associated with visuo-spatial attention (e.g. temporo-parietal regions), visuo-motor control (e.g. cerebellum, frontal eye fields) and sentence processing (e.g. left superior and middle temporal regions, left IFG). Twenty-three monolingual native speakers of German read 216 sentences of varying structures, including semantically anomalous and non-anomalous material and sentences with target words that differed in lexical frequency and predictability. Based on the eye-movement data, every progressive and regressive saccadic eye movement was identified and temporally correlated with the fMRI signal. On the subject level, every saccade onset was modeled as a critical event, either as a progressive or as a regressive saccade. On the group level, an event-related design was used to compare progressive versus regressive saccades against a resting baseline. Preliminary results show significant activation differences between the two types of saccadic eye movements. Progressive saccades reveal enhanced neural responses only in the left superior occipital lobe. In contrast, regressive eye movements show widely distributed activation patterns within a fronto-parieto-temporal network (e.g. left IFG, MTG, frontal eye fields). Importantly, this pattern seems not to be driven by anomalous sentences alone. These findings suggest that inter-word regressions are driven jointly by linguistic and oculomotor processes, as evidenced by the global activation pattern, whereas progressive saccades seem to mainly reflect automatic visuo-motor processing used to provide bottom-up information to proceed through the sentence. This supports previous findings that regressions follow attentional shifts in the perceptual span in reading (Apel et al. 2012) and the view that the eyes automatically scan the sentence with progressive saccades, while decisions regarding re-reading are guided by higher-order language comprehension and attentional re-orientation in oculomotor control.
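The subject-level model described above (each saccade onset as a critical event) can be sketched as follows: stick functions at progressive and regressive saccade onsets are convolved with a canonical double-gamma HRF and downsampled to the TR grid. The TR, scan count, HRF parameters and onsets are invented for illustration, not taken from the study.

```python
# Hypothetical sketch of the saccade-based fMRI design matrix: onsets of
# progressive and regressive saccades convolved with a canonical HRF.
import numpy as np
from scipy.stats import gamma

tr, n_scans, dt = 2.0, 300, 0.1                # scan parameters (assumption)
t_hrf = np.arange(0, 30, dt)
# Double-gamma canonical HRF (SPM-style shape parameters).
hrf = gamma.pdf(t_hrf, 6) - gamma.pdf(t_hrf, 16) / 6.0

def regressor(onsets_sec):
    """Stick functions at saccade onsets, convolved with the HRF,
    downsampled to the TR grid."""
    high_res = np.zeros(int(n_scans * tr / dt))
    high_res[(np.asarray(onsets_sec) / dt).astype(int)] = 1.0
    conv = np.convolve(high_res, hrf)[:len(high_res)]
    return conv[::int(tr / dt)]

rng = np.random.default_rng(4)
prog = regressor(np.sort(rng.uniform(0, n_scans * tr - 30, 400)))  # simulated
regr = regressor(np.sort(rng.uniform(0, n_scans * tr - 30, 80)))   # simulated
design = np.column_stack([prog, regr, np.ones(n_scans)])  # + intercept
print(design.shape)  # (300, 3) design matrix for the subject-level GLM
```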



Slide Session C

Saturday, October 17, 8:30 - 9:50 am, Grand Ballroom

Outside the Left Peri-Sylvian Cortex

Chair: Kate Watkins, University of Oxford
Speakers: Einat Liebenthal, Lars Meyer, Jessica L. Mow, Lotte Schoot

Abnormal semantic processing of emotional and neutral words in post-traumatic stress disorder

Einat Liebenthal1, Hong Pan1, Swathi Iyer1, Monica Bennett1, Benjamin Coiner1, Daniel Weisholtz1, David Silbersweig1, Emily Stern1; 1Brigham and Women's Hospital, Harvard Medical School

Post-traumatic stress disorder (PTSD) is associated with dysfunction of fronto-limbic brain circuits involved in emotional memory and the control of behavioral manifestations of fear. Previous neuroimaging work has demonstrated a relatively elevated and sustained-over-time response in the left amygdala to trauma-related words in subjects with PTSD, and a positive correlation between the level of the amygdala response and PTSD symptom severity (Protopopescu et al., Biological Psychiatry 2005; 57:464-473). The present study focused on the role of the semantic system in mediating the processing of emotional words in subjects with PTSD. Methods: Participants were patients with a primary diagnosis of sexual/physical assault PTSD (N=29; mean age (SD) = 35 (9), 25 females) and normal (NL) subjects (N=23; mean age (SD) = 29 (8), 11 females). Whole-brain blood oxygen level dependent functional magnetic resonance imaging (fMRI) responses were compared in the PTSD and NL subjects during silent reading of words. The words consisted of 24 negative trauma-related (e.g., rape, force), 24 negative non-trauma-related (e.g., cancer, frantic), 48 neutral (e.g., bookcase, rotate), and 48 positive (e.g., gentle, delighted) words, balanced across valence categories for frequency, length, and part of speech. The fMRI activity was compared between the early and late epochs of the scan to assess sensitization and habituation. In the PTSD patients, the fMRI activity was also correlated with symptom severity measured by the Clinician-Administered PTSD Scale (CAPS). Results: In a post-scan behavioral test, the PTSD and NL subjects rated the trauma words as more negative than the neutral words, and the PTSD subjects rated the trauma words as more negative than the non-trauma words (p < .001). The PTSD compared to NL subjects showed a different fMRI pattern in left temporal and left inferior parietal language areas in response to both the trauma and neutral words. In the angular gyrus (AG), fusiform gyrus (FG), and rostral anterior cingulate gyrus (AC), there was reduced deactivation to the trauma words. In the posterior middle temporal gyrus (pMTG), there was elevated activation to the trauma words in the late epoch (i.e., there was less habituation). In the pMTG and AC, the response was also elevated to the neutral words. In PTSD subjects, the level of activation to trauma versus neutral words was positively correlated with PTSD symptom severity in the pMTG in the late epoch, and negatively correlated in the AC. Conclusions: The results demonstrate a different time course of habituation and different specificity of left semantic areas in PTSD relative to NL subjects. The areas showing reduced deactivation to the trauma words in PTSD (AG, FG, AC) are part of the default network, in which deactivations have been associated with spontaneous (task-unrelated) cognitive and semantic processing. The left pMTG, showing elevated activation in PTSD to both trauma and neutral words, is part of the semantic network associated with supramodal integration and concept retrieval. Overall, these results suggest that semantic processing of both trauma-related negative words and neutral words is impaired in PTSD, as reflected by more extensive engagement of the semantic system.
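A minimal, hypothetical sketch of the habituation logic: compare ROI responses to trauma words between early and late scan epochs, and correlate the late-epoch response with CAPS scores. All values below are simulated; only the sample size and the CAPS measure are taken from the abstract.

```python
# Hypothetical sketch: early vs. late epoch contrast in a region of interest,
# plus correlation of the late-epoch response with symptom severity.
import numpy as np
from scipy.stats import ttest_rel, pearsonr

rng = np.random.default_rng(8)
n_ptsd = 29
# Simulated pMTG responses (beta estimates) to trauma words per epoch.
early = rng.normal(1.0, 0.5, n_ptsd)
late = early - rng.normal(0.1, 0.4, n_ptsd)   # weak habituation (assumption)

t, p = ttest_rel(early, late)                 # early vs. late epoch contrast
caps = rng.normal(70, 15, n_ptsd)             # simulated CAPS severity scores
r, pr = pearsonr(caps, late)
print(f"habituation: t={t:.2f}, p={p:.3f}; CAPS vs. late pMTG: r={r:.2f}")
```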

Syntactic Bias Overrides Speech Acoustics via Delta-Band Oscillatory Phase

Lars Meyer1, Molly J. Henry2, Noura Schmuck3, Phoebe Gaston4, Angela D. Friederici1; 1Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 04303 Leipzig, Germany, 2Brain and Mind Institute, University of Western Ontario, Ontario, Canada N6G 1H1, 3Department of English and Linguistics, Johannes Gutenberg University, 55128 Mainz, Germany, 4Department of Linguistics, University of Maryland, College Park, MD, 20742-7505, USA

Language comprehension requires that single words be grouped into syntactic phrases, because the sheer number of words often exceeds working-memory capacity. In speech, syntactic phrases are delimited by prosodic boundaries, mostly aligning syntactic and prosodic grouping patterns. However, sentences frequently allow for two alternative grouping patterns. In such cases, comprehenders may internally form a syntactic phrase that contradicts the acoustic boundary cues of speech prosody. The crucial question is whether language comprehension proceeds in a clear bottom-up manner with primacy on acoustic cues, or whether top-down syntactic information is dominant enough to override speech acoustics. While delta-band oscillations are known to track speech prosody, we hypothesized here that an internal grouping bias can override prosody tracking in ambiguous situations, which should be evident in delta-band oscillations when comprehenders choose grouping patterns different from those indicated by speech prosody. Our auditory electroencephalography study employed ambiguous sentence materials, the interpretation of which depended on whether an identical word was either followed by a prosodic boundary or not, thereby signaling the ending or continuation of the current phrase. Delta-band oscillatory phase at the critical word should reflect whether participants terminate a phrase despite a lack of acoustic boundary cues. The factorial analysis of delta-band oscillatory phase, crossing speech prosody with participants’ grouping choice, revealed a main effect of grouping choice, independent of speech prosody. Furthermore, participants showed reduced delta-band entrainment to speech prosody when sentence interpretation followed their internal grouping bias. Source localization suggests that brain regions involved in linguistic combinatorics and pitch perception underlie the effect. The results indicate that an internal linguistic bias for grouping words into syntactic phrases can override speech prosody via delta-band oscillatory phase at the neural level.
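A minimal sketch of the phase read-out underlying this kind of analysis, assuming a 0.5-3 Hz delta band and a 500 Hz sampling rate (both assumptions): band-pass filter, Hilbert transform, and phase extraction at the critical word onset. The signal and onset time below are simulated.

```python
# Hypothetical sketch of delta-band phase extraction: band-pass the EEG in
# the delta range, take the analytic signal, read out phase at word onset.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                                   # sampling rate in Hz (assumption)
rng = np.random.default_rng(5)
eeg = rng.standard_normal(10 * fs)         # 10 s of simulated single-trial EEG

# Zero-phase band-pass filter in the delta band.
b, a = butter(3, [0.5, 3.0], btype="bandpass", fs=fs)
delta = filtfilt(b, a, eeg)

phase = np.angle(hilbert(delta))           # instantaneous phase in radians
critical_word_sample = int(4.2 * fs)       # illustrative onset at 4.2 s
print(f"delta phase at critical word: {phase[critical_word_sample]:.2f} rad")
```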

Encoding and Organization of Phonemes by Feature in STG

Jessica L. Mow1, Laura E. Gwilliams2, Bahar Khalighinejad3, Nima Mesgarani3, Alec Marantz1,2; 1New York University Abu Dhabi, 2New York University, 3Columbia University

Recent studies have found evidence for neurons within the superior temporal gyrus (STG) encoding groups of phonemes by phonetic feature (e.g. manner, place of articulation). Though the exact spatial distribution of phonetic feature detectors was unique to each subject, the organization of neurons relative to phonetic similarity was consistent (Mesgarani et al., Science 343:1006-10, 2014; Chang et al., Nat Neurosci 13:1428-32, 2010). These studies examined high gamma responses from cortical-surface electrodes with electrocorticography (ECoG) and lower-frequency theta- and gamma-band responses from scalp electrodes using electroencephalography (EEG) (Di Liberto et al., Curr Biol, 25:2457-65, 2015). In the present study, we aimed to replicate the previous findings using magnetoencephalography (MEG), analyzing both sensors and (reconstructed) sources. Can we find sensors and sources responding at high gamma frequency around 100 ms post phoneme onset that are sensitive to distinctive features, and can we replicate the theta-band sensitivity on sensors in the same time window observed with EEG? Three subjects were presented with 498 sentences from the TIMIT corpus (Garofolo et al., Linguistic Data Consortium, 1993), identical to those used in Mesgarani et al. (2014). Each sentence was followed by a word memory task. Data were noise-reduced, downsampled to 500 Hz and z-scored before being processed with MNE-Python. Overlapping analysis epochs were defined relative to the onset of each phoneme, resulting in a matrix of responses to each phoneme for each subject, at each millisecond, for each sensor or reconstructed source, for each frequency band under analysis. Data are being processed through an adapted version of the script used in Mesgarani et al. (2014) to compute the phoneme selectivity index (PSI) for sensors as well as for reconstructed sources in the superior temporal areas of each subject. The analysis will be repeated for the high gamma (75-150 Hz) band used in Mesgarani et al. (2014) as well as for the theta and gamma bands identified by Di Liberto et al. (2015). In addition, we will replicate the analysis of Di Liberto et al. (2015) by computing the relation between the combined feature and spectrogram representation of speech sounds (FS) and the evoked MEG response at each sensor and at each of our reconstructed sources in STG, creating a multivariate temporal response function (mTRF). The correlation between the MEG signal predicted from the FS and the actual signal provides an index of sensor/source sensitivity to phoneme identity at particular time windows and can test for the localization of phoneme-sensitive sources in STG at 100+ ms post-phoneme onset. The results will help determine the feasibility of using MEG to analyze high gamma responses, linking MEG to ECoG and the organization of spike trains in clusters of neurons. The spatial resolution of MEG as compared to ECoG can be assessed for a known cortical response.
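To make the mTRF idea concrete, here is a minimal ridge-regression sketch: time-lagged copies of an FS-like stimulus representation predict one MEG sensor’s signal, and the prediction-signal correlation indexes that sensor’s sensitivity. Dimensions, lag range and regularization strength are illustrative assumptions, and the edge wrap-around from np.roll is ignored for brevity.

```python
# Hypothetical sketch of an mTRF analysis: ridge regression from time-lagged
# stimulus features onto one MEG time course. All data are simulated.
import numpy as np

rng = np.random.default_rng(6)
fs_hz, n_samp, n_feat = 100, 6000, 20          # 60 s at 100 Hz (assumption)
stim = rng.standard_normal((n_samp, n_feat))   # FS-like representation
meg = rng.standard_normal(n_samp)              # one sensor/source time course

lags = np.arange(0, 30)                        # 0-290 ms of lags at 100 Hz
X = np.column_stack([np.roll(stim, lag, axis=0) for lag in lags])

ridge = 1e2                                    # regularization (assumption)
w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ meg)
pred = X @ w
r = np.corrcoef(pred, meg)[0, 1]               # sensitivity index, this sensor
print(f"mTRF prediction accuracy: r={r:.2f}")
```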

Finding your way in the zoo: how situation model alignment affects interpersonal neural coupling

Lotte Schoot1, Arjen Stolk3,2, Peter Hagoort1,2, Simon Garrod4, Katrien Segaert5,1, Laura Menenti4,1; 1Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands, 2Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands, 3Knight Lab, University of California, Berkeley, US, 4Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK, 5School of Psychology, University of Birmingham, Birmingham, UK

INTRODUCTION: We investigated how speaker-listener alignment at the level of the situation model is reflected in inter-subject correlations in temporal and spatial patterns of brain activity, also known as between-brain neural coupling (Stephens et al., 2010). We manipulated the complexity of the situation models that needed to be communicated (simple vs complex situation model) to investigate whether this affects neural coupling between speaker and listener. Furthermore, we investigated whether the degree to which alignment was successful was positively related to the degree of between-brain coupling. METHOD: We measured neural coupling (using fMRI) between speakers describing abstract zoo maps and listeners interpreting those descriptions. Each speaker described from memory both a ‘simple’ map, a 6x6 grid including five animal locations, and a ‘complex’ map, an 8x8 grid including 7 animal locations, with the order of map description randomized across speakers. Audio recordings of the speakers’ utterances were then replayed to the listeners, who had to reconstruct the zoo maps on the basis of their speakers’ descriptions. On the group level, we used a GLM approach to model between-brain neural coupling as a function of condition (simple vs complex map). Communicative success, i.e. map reproduction accuracy, was added as a covariate. RESULTS: Whole-brain analyses revealed a positive relationship between communicative success and the strength of speaker-listener neural coupling in the left inferior parietal cortex. That is, the more successful listeners were in reconstructing the map based on what their partner described, the stronger the correlation between that speaker’s and that listener’s BOLD signals in that area. Furthermore, within the left inferior parietal cortex, pairs in the complex situation model condition showed stronger between-brain neural coupling than pairs in the simple situation model condition. DISCUSSION: This is the first two-brain study to explore the effects of the complexity of the communicated situation model and the degree of communicative success on (language-driven) between-brain neural coupling. Interestingly, our effects were located in the inferior parietal cortex, previously associated with visuospatial imagery. This process likely plays a role in our task, in which the communicated situation models had a strong visuospatial component. Given that coupling increased the more situation models were successfully aligned (i.e. with map reproduction accuracy), it was surprising that we found stronger coupling in the complex than in the simple situation model condition. We plan ROI analyses in primary auditory, core language, and discourse processing regions. The present findings open the way for exploring the interaction between situation models and linguistic computations during communication.
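At its core, the coupling measure reduces to correlating two time series. A minimal, hypothetical sketch follows: simulated speaker and listener BOLD time courses from one region are correlated, with an assumed listener lag standing in for hemodynamic and comprehension delays. The actual study modeled coupling in a group-level GLM with accuracy as a covariate; this only illustrates the pairwise measure.

```python
# Hypothetical sketch of between-brain coupling: correlate the speaker's and
# listener's ROI time courses. Time courses and lag are simulated placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n_vols = 240                               # volumes in one map description
speaker = rng.standard_normal(n_vols)      # left inferior parietal ROI signal
lag = 2                                    # listener lag in volumes (assumption)
listener = np.roll(speaker, lag) * 0.6 + rng.standard_normal(n_vols) * 0.8

r, p = pearsonr(speaker, listener)
print(f"speaker-listener coupling: r={r:.2f}, p={p:.3f}")
```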