Do surprisal and entropy affect delta-band signatures of syntactic processing?

Poster D33 in Poster Session D, Wednesday, October 25, 4:45 - 6:30 pm CEST, Espace Vieux-Port

Sophie Slaats1, Antje S. Meyer1,2, Andrea E. Martin1,2; 1Max Planck Institute for Psycholinguistics, 2Donders Institute for Brain, Cognition and Behaviour

When we understand language, we recognize words and combine them into sentences. How do we do this? In this work, we used a naturalistic listening paradigm and magnetoencephalography (MEG) to explore the hypothesis that listeners use probabilistic information about words to infer abstract sentence structure. Lexical probability has proven to be a strong predictor of neuroimaging and behavioral data. For example, higher surprisal values tend to lead to slower reading times (e.g., Aurnhammer & Frank, 2019), oscillations in the delta, beta, and gamma bands track lexical surprisal (Weissbart et al., 2020; Gillis et al., 2021), and transitional probabilities can induce low-frequency power modulations (Bai et al., 2022; Batterink & Paller, 2017). At the same time, syntactic structure building is crucial for comprehension (e.g., Coopmans et al., 2022). Several frequency bands show signatures of this process, particularly the delta (Brennan & Martin, 2019; Bai et al., 2022; Kaufeld et al., 2020; Lo et al., 2022; Li & Hale, 2019; Meyer et al., 2017; Ten Oever et al., 2022) and gamma bands (Nelson, 2017; Peña & Melloni, 2012). We test a framework in which lexical distributional information is a cue to latent linguistic structure (Martin, 2016; 2020). We ask whether the neural encoding of linguistic structure changes as a function of the distributional properties of a word. We used temporal response functions (TRFs) to compare responses to syntactic annotations of the stimuli between words with high and low surprisal/entropy values. If distributional information affects the computation of syntactic structure, the neural encoding of syntactic structure should differ between high- and low-surprisal and -entropy words. If the responses do not differ, distributional and structural information may exist alongside each other but be unrelated in their neural encoding. We analyzed MEG data from 24 native speakers of Dutch who listened to three fairytales with a total duration of 49 minutes. To create syntactic features, we manually parsed all stories using a simplified minimalist approach and obtained bracket counts, a metric of syntactic depth, according to ‘top-down’ and ‘bottom-up’ parsing strategies (Brennan et al., 2016; Nelson et al., 2017; Coopmans et al., 2022). We then divided the bracket counts, according to the median lexical surprisal/entropy values (obtained with GPT-2), into ‘high’ and ‘low’ surprisal/entropy sets. Using TRFs, we estimated the response to each of these syntactic features and compared the responses for the ‘high’ sets with those for the ‘low’ sets. Preliminary results suggest that surprisal and entropy affect delta-band responses to syntactic information. Both metrics affect the response to bottom-up bracket counts, with effects of entropy seen mainly early in the time window (before 500 ms after word onset) and effects of surprisal spanning the entire time window (until 1000 ms after word onset). These and further findings may suggest that the brain uses probabilistic information to reach a structured, meaningful representation of the input. The findings are consistent with models that view language comprehension as a probabilistic process of mapping perceptual input onto abstract, deterministic representations (Martin, 2020; Hale et al., 2022).
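
To illustrate the logic of the median-split TRF comparison described above, the sketch below shows one way such an analysis could be set up in Python with MNE-Python's ReceptiveField estimator. It is not the authors' code: the sampling rate, the regressor construction, the ridge regularization, and all variable names (meg_data, word_onsets, bracket_counts, surprisal) are illustrative assumptions, and synthetic data stand in for the MEG recordings and story annotations.

# Minimal, illustrative sketch of a median-split TRF analysis (not the authors' pipeline).
# Bracket-count regressors are split into 'high' and 'low' surprisal sets at word onsets,
# and a temporal response function is estimated for each set over a 0-1000 ms window.
import numpy as np
from mne.decoding import ReceptiveField

# Synthetic placeholder data; the real analysis would use preprocessed MEG and annotations.
rng = np.random.default_rng(0)
sfreq = 120.0                                                   # assumed resampling rate (Hz)
n_times, n_channels, n_words = int(60 * sfreq), 10, 150
meg_data = rng.standard_normal((n_times, n_channels))           # (time, channels)
word_onsets = np.sort(rng.choice(n_times, n_words, replace=False))  # word-onset samples
bracket_counts = rng.integers(1, 6, n_words).astype(float)      # e.g., bottom-up bracket counts
surprisal = rng.gamma(2.0, 2.0, n_words)                        # word-level surprisal values

def impulse_regressor(onsets, values, n_times):
    """Spike train with one impulse per word onset, scaled by the syntactic feature."""
    reg = np.zeros(n_times)
    reg[onsets] = values
    return reg

# Median split on surprisal (the same logic applies to entropy).
high = surprisal > np.median(surprisal)
X = np.column_stack([
    impulse_regressor(word_onsets[high], bracket_counts[high], n_times),    # 'high' set
    impulse_regressor(word_onsets[~high], bracket_counts[~high], n_times),  # 'low' set
])

# Estimate ridge-regularized TRFs from 0 to 1000 ms after word onset.
rf = ReceptiveField(tmin=0.0, tmax=1.0, sfreq=sfreq,
                    feature_names=['brackets_high', 'brackets_low'],
                    estimator=1.0, scoring='corrcoef')
rf.fit(X, meg_data)

# rf.coef_ has shape (n_channels, n_features, n_delays); comparing the two feature
# slices across participants (e.g., with a cluster-based permutation test) asks whether
# the response to syntactic structure differs between high- and low-surprisal words.
trf_high, trf_low = rf.coef_[:, 0, :], rf.coef_[:, 1, :]

In the full analysis described above, both parsing strategies (top-down and bottom-up) and both splits (surprisal and entropy) would presumably each contribute regressors, with the high/low comparison carried out at the group level across the 24 participants.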

Topic Areas: Syntax and Combinatorial Semantics, Speech Perception
