Slide Sessions

Slide Session A: Language Production

Thursday, October 6, 1:30 - 3:00 pm EDT, Regency Ballroom

Chair: Sharon Thompson-Schill, University of Pennsylvania

Diverging neural dynamics of syntactic structure building in naturalistic speaking and listening

Laura Giglio1,2, Peter Hagoort1,2, Daniel Sharoh1,2, Markus Ostarek1,3; 1Max Planck Institute for Psycholinguistics, 2Donders Institute for Brain, Cognition and Behaviour, 3University of Glasgow

In the last decade there has been an increase in studies of naturalistic language comprehension (Brennan, 2016), which offer increased ecological validity and fewer confounds from the absence of context that is typical of controlled stimulus sets. Studying naturalistic production may be even more critical, since most production studies use highly artificial tasks to ensure the production of varied speech output. These tasks give speakers virtually no control over what is to be said, although deciding what to say and how to say it is a defining characteristic of production. As a consequence, the current understanding of the neural infrastructure for sentence production is confounded by task requirements. In this study, we aimed to gain a better understanding of syntactic processing in naturalistic production and how it differs from naturalistic comprehension. We analysed an existing fMRI dataset in which a group of participants (n=16) freely spoke for several minutes, recalling an episode of a TV series (Chen et al., 2017). Another group of participants (n=36) listened to the spoken recall of one production participant (Zadbood et al., 2017). Each text was parsed with a probabilistic context-free phrase-structure grammar (Stanford parser; Klein and Manning, 2003). We then quantified word-by-word syntactic processing as the number of syntactic nodes built with each word. Nodes were counted under two parsing strategies that make different predictions about the timing of phrase-structure building operations: a highly anticipatory strategy, predicting increased activity when phrases are opened (top-down), and an integratory strategy, predicting increased activity when they are closed (bottom-up). We expected these parsing strategies to predict activity differently in each modality, due to processing differences between production and comprehension.
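The node-counting idea can be sketched directly on a bracketed parse: in a Penn-style bracketing, the opening brackets immediately before a word correspond to the nodes a top-down parser builds at that word, and the closing brackets after it correspond to the nodes a bottom-up parser completes there. A minimal stdlib Python sketch (illustrative only; the study used Stanford-parser output and a more elaborate pipeline):

```python
import re

def node_counts(parse):
    """Per-word top-down (opened) and bottom-up (closed) node counts
    for a Penn-style bracketed parse such as
    "(S (NP (DT the) (NN dog)) (VP (VBD ran)))".

    Assumes every "(" is immediately followed by a node label.
    Returns a list of (word, opened, closed) tuples.
    """
    tokens = re.findall(r"\(|\)|[^\s()]+", parse)
    words = []                   # [word, opened, closed]
    opened_since_last_word = 0
    i = 0
    while i < len(tokens):
        t = tokens[i]
        if t == "(":
            opened_since_last_word += 1
            i += 2               # skip the node label that follows "("
        elif t == ")":
            words[-1][2] += 1    # closing bracket completes a node at the last word
            i += 1
        else:
            # a terminal word "opens" every node begun since the previous word
            words.append([t, opened_since_last_word, 0])
            opened_since_last_word = 0
            i += 1
    return [tuple(w) for w in words]
```

For the example sentence above, "the" opens S, NP, and DT (top-down count 3) while "ran" closes VBD, VP, and S (bottom-up count 3), matching the intuition that anticipatory load is front-loaded and integratory load is sentence-final.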
We entered these word-by-word syntactic predictors into a linear mixed-effects model of brain activity in three regions of interest: the pars opercularis and pars triangularis of the left inferior frontal gyrus (LIFGoper and LIFGtri) and the left posterior middle temporal gyrus (LpMTG). Both parsers added unique contributions over a baseline model including word rate, word frequency, and word surprisal extracted with GPT-2. Anticipatory node counts led to a decrease in BOLD activity in comprehension but an increase in production, showing that syntactic processing occurs at early stages in production. Additional parsing strategies were explored that make different predictions about the incrementality of sentence production, showing that phrase-structure building operations are highly incremental and occur before word onset. This was also confirmed by longer speech pauses before words associated with more top-down operations. Integratory node counts instead positively predicted BOLD activity in comprehension and negatively in production, confirming that phrase-structure building carries a higher load in later stages of the sentence in comprehension, when all words can be unambiguously merged into the syntactic structure. Both LpMTG and LIFGtri responded to syntactic processing in comprehension, but only LIFGtri was responsive in production, confirming LIFGtri as a critical hub for syntactic processing across modalities, even in task-free designs. Overall, the results show that the unfolding of syntactic processing diverges between speaking and listening, and they highlight the insights that can be gained by studying naturalistic production.

Beyond Broca: Neural Architecture and Evolution of a Dual Motor Speech Coordination System

Gregory Hickok1, Jonathan Venezia2, Alex Teghipco3; 1University of California, Irvine, 2VA Loma Linda Health Care System and Loma Linda University School of Medicine, 3University of South Carolina

Classical neural architecture models of speech production propose a single system, centered on Broca’s area, that coordinates all the vocal articulators from lips to larynx. Modern evidence has challenged both the idea that Broca’s area is involved in motor speech coordination and the idea that there is only one coordination network. Drawing on a wide range of evidence, we propose a dual speech coordination model in which laryngeal control of pitch-related aspects of prosody and song is coordinated by a hierarchically organized dorsolateral system, while supralaryngeal articulation at the phonetic/syllabic level is coordinated by the classic ventrolateral speech/language system. We argue further that these two speech production subsystems have distinguishable evolutionary histories and discuss the implications for models of language evolution.

Cerebellar contributions to speech fluency in neurotypical adults

Sivan Jossinger1, Maya Yablonski1, Ofer Amir2, Michal Ben-Shachar1; 1Bar-Ilan University, 2Tel-Aviv University

Producing fast and fluent speech is an astounding human ability that requires precise integration of several processes, including conceptual framing, lexical access, phonological encoding, and articulatory control. Despite this complexity, studies investigating the neural substrates of speech fluency generally focus on either lexical access or articulatory control, but rarely contrast the two. Neuroimaging data point to significant cerebellar involvement during verbal fluency tasks (Schlösser et al., 1998) as well as in tasks that require speech rate modulation (Riecker et al., 2005, 2006). Similarly, patients with cerebellar lesions exhibit impaired verbal fluency and significantly slower speech rate compared to controls (Ackermann et al., 1992; Peterburs et al., 2010). Here, we evaluated the contribution of the cerebellar peduncles (CPs) to the lexical and articulatory components of speech fluency. Diffusion imaging data and speech fluency measures were evaluated in 45 neurotypical adults. Unstructured interviews were used to assess natural speaking rate and articulation rate, and timed verbal fluency tasks were used to assess semantic and phonemic fluency. Imaging data were acquired on a 3T Siemens Magnetom Prisma scanner using a diffusion-weighted single-shot EPI sequence (b=1000 s/mm2; 64 diffusion directions; ~1.7×1.7×1.7 mm3 resolution). Probabilistic tractography and constrained spherical deconvolution (CSD) modeling were used to generate individual tractograms. Segmented tracts included the bilateral superior, middle, and inferior CPs (SCP, MCP, and ICP, respectively). To this end, we developed a new protocol within the automatic fiber segmentation and quantification (AFQ) package to adequately follow the trajectories of the SCP and MCP as they decussate. Spearman’s correlations were calculated between speech fluency measures and diffusion measures along the tracts.
The results demonstrate a dissociation in the functional contributions of the CPs to speech production. Specifically, fractional anisotropy (FA) within the right SCP was associated with phonemic fluency (r=.431), while mean diffusivity (MD) within the right MCP was associated with speaking rate (r=-.447) (p<.05, family-wise error corrected). Importantly, partial correlation analyses indicated that the correlation within the right SCP was not driven by speaking rate, and the correlation within the right MCP was not driven by phonemic fluency (partial correlation in right SCP: r=.438, p=.003; in right MCP: r=-.451, p=.002). Moreover, these effects do not reflect articulatory control, as controlling for the contribution of articulation rate did not change the results (partial correlation in right MCP: r=-.477, p=.001; in right SCP: r=.442, p=.003). Finally, no significant associations were found between the CPs and articulation rate, in contrast to previous findings in adults who stutter. Our findings support the involvement of the cerebellum in aspects of speech production that go beyond articulatory control in neurotypical speakers. Using CSD modeling and probabilistic tracking enabled us to follow the trajectories of the SCP and MCP as they decussate, and to detect novel associations between these pathways and speech fluency. By evaluating multiple measures of speech fluency, our study makes an important contribution to the understanding of the neural basis of speech production in neurotypical adults.
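The correlation machinery behind these analyses can be sketched in plain Python: Spearman’s rho is a Pearson correlation computed on ranks, and a first-order partial correlation can be derived from the three pairwise coefficients. This is a generic textbook illustration, not the authors’ AFQ-based pipeline, and the partial-Spearman formula below is one standard choice:

```python
def rankdata(xs):
    """Ranks of xs (1-based), with ties given their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1   # average rank for the tie run
        i = j + 1
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    return pearson(rankdata(x), rankdata(y))

def partial_spearman(x, y, z):
    """First-order partial correlation of x and y controlling for z,
    computed from the three pairwise Spearman coefficients."""
    rxy, rxz, ryz = spearman(x, y), spearman(x, z), spearman(y, z)
    return (rxy - rxz * ryz) / ((1 - rxz ** 2) * (1 - ryz ** 2)) ** 0.5
```

In the study’s terms, x would be a diffusion measure (e.g., FA in the right SCP), y a fluency measure (e.g., phonemic fluency), and z the controlled variable (e.g., speaking rate), with one correlation per node along the tract profile.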

Timing and location of speech errors induced by direct cortical stimulation

Heather Kabakoff1, Leyao Yu2, Daniel Friedman1, Patricia Dugan1, Werner Doyle1, Orrin Devinsky1, Adeen Flinker1,2; 1New York University School of Medicine, 2New York University School of Engineering

Direct cortical stimulation (DCS) is routinely performed in neurosurgical patients to identify eloquent cortical regions to preserve. While various types of interruption to continued speech can indicate language localization, the most dramatic is speech arrest, in which the patient is unable to continue speaking. Most reports have identified motor cortex and the inferior frontal gyrus (IFG) as the sites most likely to elicit speech arrest, though stimulation of the superior temporal gyrus (STG) also induces speech arrest, with high variability (e.g., Chang et al., 2017). While many studies have reported behavioral deficits following stimulation, none have investigated the temporal lag from stimulation of a region to actual speech arrest. The present study is the first to provide a map of the temporal dynamics of speech arrest across cortical regions that are critical to speech and language. We hypothesized that the time from stimulation to speech interruption should vary across cortex. Based on reports of activity in IFG preceding activity in motor cortex by approximately 250 milliseconds (Flinker et al., 2015), we predicted that stimulation of IFG would take longer than stimulation of motor cortex to induce speech arrest, because the active motor plans would need to be updated. Furthermore, recent evidence indicates that stimulation of the planum temporale leads to speech arrest, suggesting that superior temporal activity could also be necessary for speech production (Forseth et al., 2020). Using continuous extra-operative intracranial EEG monitoring with high spatial and temporal resolution, we analyzed data collected during clinical direct electrocortical stimulation mapping in 20 patients with refractory epilepsy as they recited automatic speech (e.g., numbers, days of the week, months of the year).
Of 359 speech interruptions labeled by the attending epileptologist as speech arrest or as a motor-based interruption, 255 followed stimulation in three broad regions of interest (motor cortex, IFG, STG). Among 78 motor hits, 65 (83%) were in motor cortex while only 7 (9%) were in IFG. The 177 speech arrest hits were more balanced across cortex, with 86 (49%) in STG, 60 (34%) in IFG, and 31 (18%) in motor cortex. We observed robust speech arrest in STG with short latencies, followed by frontal cortex, including IFG and motor cortex (in seconds, STG: mean=0.76, SEM=0.067; IFG: mean=1.17, SEM=0.12; motor cortex: mean=1.11, SEM=0.16). Nonparametric testing of the speech arrest hits revealed that region was a significant predictor of latency (χ2=8.97, df=2, p=0.011); a post-hoc pairwise test revealed that latencies in STG were significantly shorter than in motor cortex (p=0.025). To control for epileptiform activity, we excluded 63 events with afterdischarges and found that region was still a significant predictor of latency (χ2=8.16, df=2, p=0.017). Pairwise tests revealed that latencies in motor cortex (p=0.039) and in IFG (p=0.044) were significantly longer than in STG. These rapid speech arrest events in STG likely indicate that retrieval has been interrupted at the lexical or sublexical level. Alternatively, the inability to continue speaking may result from a sudden cessation of auditory feedback, interrupting the online integration of that feedback into outgoing motor plans.
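The abstract reports a nonparametric test yielding a χ2 statistic on 2 degrees of freedom across three regions. The test is not named, but a Kruskal-Wallis test matches that description, so the following stdlib sketch of the H statistic (without tie correction) is an assumption about the analysis, not a confirmed detail:

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic for k independent groups.

    Under the null hypothesis, H is approximately chi-square
    distributed with k - 1 degrees of freedom. Ties receive
    average ranks, but no tie correction is applied.
    """
    pooled = [v for g in groups for v in g]
    n = len(pooled)
    # Rank the pooled sample (1-based, average ranks for ties).
    order = sorted(range(n), key=lambda i: pooled[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    # H = 12 / (n (n + 1)) * sum(R_g^2 / n_g) - 3 (n + 1),
    # where R_g is the rank sum of group g.
    rank_iter = iter(ranks)          # ranks are in pooled (group) order
    term = 0.0
    for g in groups:
        r_sum = sum(next(rank_iter) for _ in g)
        term += r_sum ** 2 / len(g)
    return 12.0 / (n * (n + 1)) * term - 3 * (n + 1)
```

Called with the per-region latency vectors, e.g. `kruskal_wallis_h(stg_latencies, ifg_latencies, motor_latencies)`, the resulting H would be compared against a χ2 distribution with df=2.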