
Tracking lexical access during sentence production

Poster C47 in Poster Session C, Friday, October 7, 10:15 am - 12:00 pm EDT, Millennium Hall
Also presenting in Poster Slam C, Friday, October 7, 10:00 - 10:15 am EDT, Regency Ballroom

Adam Morgan, Werner Doyle, Orrin Devinsky, Adeen Flinker; NYU School of Medicine

During word production, stages of lexical representation come online in a feed-forward sequence, beginning with the conceptual stage and followed by the grammatical, phonological, and finally articulatory stages (Levelt, 1989; Indefrey, 2011). However, this model is based primarily on picture naming data, and recent work indicates that the temporal dynamics of lexical access may be fundamentally different during sentence production (Momma & Ferreira, 2019). Here, we employ direct neural recordings (ECoG) in humans and leverage a machine learning (decoding) approach to track the activation of two stages of lexical representation during sentence production, elucidating (1) the patterns of neural activity that code for discrete stages of lexical representation (conceptual/articulatory), and (2) the temporal dynamics of these stages across production tasks.

Eight patients undergoing neurosurgery for refractory epilepsy repeatedly produced six nouns (dog, ninja, etc.) in response to cartoon images while electrical potentials were recorded directly from cortex. In picture naming blocks, patients saw an image of a cartoon character and responded overtly ("dog"). In scene description blocks, patients saw cartoon images of the same characters embedded in static scenes and produced corresponding sentences (e.g., "The dog tickled the ninja").

Using multi-class classifiers trained on data from the picture naming block, we were able to predict above chance (p < 0.05, permutation test; ~30% accuracy against a chance level of ~17%) which of the six nouns a subject was about to produce during sentence production. To track words' articulatory representations during sentence production, we trained an "articulatory classifier" on picture-naming activity from the time window just prior to articulation, when neural patterns encode articulatory information, and then used this classifier to predict word identity during sentence production trials. This analysis accurately predicted word identity ~200 ms prior to each word's articulation onset in sentences.

Next, we aimed to track the conceptual stage of lexical representation during sentence production. Because conceptual processing is the first stage of lexical access, we assume that the first word-specific differences in neural activity patterns after stimulus onset during picture naming (excluding vision-related electrodes) reflect conceptual information. In preliminary analyses, we identified this time period for each patient by decoding word identity from the picture naming data at each time point from stimulus onset, using 10-fold cross-validated multi-class classifiers. Once this time point was identified, we used all of the picture naming trials at that time point to train a "conceptual classifier," which we successfully used to predict word identity during sentence production.

For the first noun in a sentence (the subject), conceptual representations were detected at times comparable (relative to articulation onset) to those of nouns produced during picture naming. However, the second noun in a sentence (the object) was detected later, suggesting that for words in the middle of a speech stream, the sequence of lexical stages may be temporally condensed. Our results constitute an important step toward understanding how the brain accesses words during fluent speech, and toward linking neural spatiotemporal codes to theoretical models of sentence production.
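As a concrete illustration of the cross-task decoding approach, the Python sketch below trains a multi-class classifier on a pre-articulation window of picture-naming features and tests it on sentence-production trials, with a label-permutation test for significance. All array names, shapes, the feature representation, and the classifier choice are assumptions for illustration; the abstract does not specify the feature extraction or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-ins for preprocessed neural features (trials x electrodes x timepoints)
# and integer codes for the six nouns; real inputs would come from the ECoG
# preprocessing pipeline.
n_name, n_sent, n_elec, n_times, n_words = 120, 60, 64, 100, 6
naming_X = rng.normal(size=(n_name, n_elec, n_times))
naming_y = rng.permutation(np.repeat(np.arange(n_words), n_name // n_words))
sent_X = rng.normal(size=(n_sent, n_elec, n_times))
sent_y = rng.permutation(np.repeat(np.arange(n_words), n_sent // n_words))

# "Articulatory" window: the last samples before articulation onset in naming
# trials (here the final 20 timepoints; the real window is ~200 ms pre-onset).
win = slice(n_times - 20, n_times)
X_train = naming_X[:, :, win].reshape(n_name, -1)
X_test = sent_X[:, :, win].reshape(n_sent, -1)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, naming_y)
observed = clf.score(X_test, sent_y)

# Permutation test: shuffle sentence labels to build a chance distribution.
null = [clf.score(X_test, rng.permutation(sent_y)) for _ in range(1000)]
p = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print(f"accuracy={observed:.3f}, p={p:.3f}")
```

On real data, the train set comes from picture naming and the test set from sentence production, so any above-chance accuracy implies a word code shared across tasks.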
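Similarly, the per-timepoint search for the earliest decodable (putatively conceptual) window could look like the sketch below: 10-fold cross-validated decoding of word identity at each timepoint from stimulus onset, taking the first timepoint whose accuracy clears chance. Again, the data shapes, classifier, and threshold are illustrative assumptions; in practice a permutation-derived threshold would replace the bare chance level.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_elec, n_times, n_words = 120, 64, 100, 6
naming_X = rng.normal(size=(n_trials, n_elec, n_times))  # stand-in features
naming_y = rng.permutation(np.repeat(np.arange(n_words), n_trials // n_words))

pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

# Decode word identity at each timepoint from stimulus onset; the earliest
# timepoint whose accuracy clears chance marks the putative conceptual stage.
scores = np.array([
    cross_val_score(pipe, naming_X[:, :, t], naming_y, cv=cv).mean()
    for t in range(n_times)
])
chance = 1.0 / n_words
above = np.flatnonzero(scores > chance)  # permutation threshold preferred
print("first decodable timepoint:", above[0] if above.size else None)
```

The trials at that timepoint would then train the "conceptual classifier" applied to sentence-production data, as described above.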

Topic Areas: Language Production; Control, Selection, and Executive Processes