
Leveraging EEG decoding to examine automaticity of predictive coding during speech comprehension

Poster A68 in Poster Session A, Thursday, October 6, 10:15 am - 12:00 pm EDT, Millennium Hall
This poster is part of the Sandbox Series.

Timothy Trammel1, Matthew J. Traxler1, Tamara Y. Swaab1; 1University of California, Davis

Predictive coding is an influential theory of prediction during language and cognitive processing (Clark, 2013; Friston, 2009; 2010; Friston & Kiebel, 2009; Rao & Ballard, 1999). It assumes that higher hierarchical cortical levels continuously make top-down predictions about information at lower cortical levels. As new bottom-up sensory information becomes available to lower levels, the brain computes a prediction error – the difference between the top-down predicted input and the actual bottom-up sensory input – which is passed back up to update higher-level representations and enable better future predictions. Heikel and colleagues (2018) examined predictive coding during auditory sentence processing using a temporal generalization classification method (King & Dehaene, 2014) to decode electroencephalogram (EEG) signals. Based on predictive coding models, Heikel et al. (2018) hypothesized that, in the absence of bottom-up input, the prediction error signal should contain only information about the top-down predicted input. They tested this hypothesis by presenting participants with highly constraining auditory sentences in which critical words – either 1) animate or inanimate, or 2) concrete or abstract – were unexpectedly delayed by 1000 ms of silence. Animacy or concreteness of the upcoming word could be decoded from the EEG signal at significantly above-chance levels during this silent period, before the word's onset. These findings support the predictive coding hypothesis that prediction error EEG signals carry specific information about pre-activated features of words in highly constraining spoken sentences. However, this study does not address the question of whether semantic features of words are pre-activated under all circumstances.
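The temporal generalization logic described above – train a classifier at each time point, then test it at every other time point to see how long a neural code persists – can be sketched as follows. This is a minimal illustration with synthetic data, not the Heikel et al. pipeline: the array shapes, the injected "pre-activation" signal, and the nearest-class-mean classifier (standing in for whatever classifier the original study used) are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic EEG epochs: trials x channels x time samples (all sizes hypothetical).
n_trials, n_channels, n_times = 80, 8, 20
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)  # assumed labels: 0 = inanimate, 1 = animate
# Inject a class difference from sample 10 onward to mimic a decodable signal
# arising during the silent delay.
X[y == 1, :, 10:] += 0.8

def temporal_generalization(X, y, train_idx, test_idx):
    """Train a nearest-class-mean classifier at each training time point and
    evaluate it at every testing time point, yielding a time x time accuracy matrix."""
    n_times = X.shape[2]
    acc = np.zeros((n_times, n_times))
    for t_train in range(n_times):
        mean0 = X[train_idx][y[train_idx] == 0, :, t_train].mean(axis=0)
        mean1 = X[train_idx][y[train_idx] == 1, :, t_train].mean(axis=0)
        for t_test in range(n_times):
            Z = X[test_idx][:, :, t_test]
            # Assign each test trial to the nearer class mean.
            pred = (np.linalg.norm(Z - mean1, axis=1)
                    < np.linalg.norm(Z - mean0, axis=1)).astype(int)
            acc[t_train, t_test] = (pred == y[test_idx]).mean()
    return acc

# Simple held-out split: even trials train, odd trials test.
train_idx = np.arange(0, n_trials, 2)
test_idx = np.arange(1, n_trials, 2)
acc = temporal_generalization(X, y, train_idx, test_idx)
```

Off-diagonal cells of `acc` are the distinctive feature of this method: above-chance accuracy when training and testing times differ indicates a neural code that is sustained or reactivated, which is how decoding during the silent delay can be linked to pre-activated word features.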
Recent studies indicate that predictive processing effects during language comprehension are graded by task (Brothers et al., 2017) and speaker reliability (Brothers et al., 2019), as measured by the N400 event-related potential (ERP) component. This suggests that facilitation by prediction is not necessarily an automatic process. The present study aims to address this question by manipulating the proportion of sentences with highly predictable (high cloze) and unpredictable (low cloze) target words – either animate or inanimate – to generate 80% high cloze and 20% high cloze conditions. As in the Heikel et al. (2018) study, we will use decoding to examine whether animacy features of critical words can be reliably decoded in the 1000 ms of silence before critical-word onset. Additionally, we will examine effects of cloze probability and animacy on the amplitude of the N400 to the critical words. If prediction is automatic, then we should see significantly above-chance decoding accuracy in high cloze sentences in both the 80% high cloze and the 20% high cloze conditions. We expect to see reduced posterior N400 amplitude for high cloze relative to low cloze critical words. If animacy features are pre-activated, then we expect differences between animate and inanimate frontal negativity in low cloze sentences, but not in high cloze sentences. However, if facilitation from prediction is graded, then both ERP effects should be mitigated in the 20% high cloze condition. Together, these decoding and ERP results would provide evidence that prediction is automatic, but facilitation from prediction is not.

Topic Areas: Speech Perception, Computational Approaches