
Concreteness pre-activated earlier than word length during visual predictive priming: Evidence from EEG decoding

Poster A118 in Poster Session A, Tuesday, October 24, 10:15 am - 12:00 pm CEST, Espace Vieux-Port

Timothy Trammel1, Natalia Khodayari2, Matthew J. Traxler1, Tamara Y. Swaab1; 1University of California, Davis, 2Johns Hopkins University

A large body of research provides evidence that prediction facilitates language comprehension. However, less is known about which features of upcoming words are pre-activated during this predictive processing. Predictive coding theory holds that top-down predictions are made continuously and compared to bottom-up sensory input to generate prediction error. Therefore, some portion of the content of a predicted upcoming word should be represented before that word is encountered, and because the process is top-down, higher-level features should be predicted before lower-level features. This should be reflected in the electroencephalogram (EEG) signal recorded at the scalp. To date, however, univariate measures of EEG have not shown evidence of this hierarchy prior to the onset of the predicted stimulus. Machine learning classification – or decoding – allows us to recover the content of EEG signals and better explore which linguistic features are represented before the onset of an upcoming word. The present study used a support vector machine (SVM) to decode semantic and lexical content represented in EEG data from a visual predictive priming paradigm. If predicted features are pre-activated in a top-down fashion, then semantic features should be reliably decodable before target onset, and they should be decodable earlier than lexical features. Participants (n=45) were shown a prime word and instructed to try to predict the upcoming target word. On each of the 480 trials, the word pair was either related (circus – CLOWN; 320 trials) or unrelated (trim – CLOWN; 160 trials). The forward association strength of related pairs (range: .4 - .6; mean = .5) was controlled such that participants had approximately a 50% chance of predicting the target word. This paradigm yields three experimental conditions: predicted (a related pair in which the participant successfully predicted the target), related (a related pair in which the participant did not predict the target), and unrelated (an unrelated pair in which the participant could not predict the target). In each condition, we used an SVM to decode the concreteness (a semantic feature) and word length (a lexical feature) of the upcoming target word within a 4000 ms epoch time-locked to the onset of the prime word. Target word onset occurred 2000 ms after prime onset, allowing us to assess whether these features were activated before or after the target. We used cluster-based permutation testing to compare SVM decoding accuracy against chance (50%). In the predicted condition, concreteness could be reliably decoded before target onset but not after it. In the related condition, concreteness was reliably decoded both before and after target onset. In contrast, concreteness could not be reliably decoded before target onset in the unrelated condition. Word length was decodable only after target onset in all three conditions. These findings suggest that semantic features are pre-activated before lexical features, consistent with predictive coding accounts of predictive pre-activation.
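
To illustrate the kind of analysis described above, the sketch below shows time-resolved SVM decoding of prime-locked EEG epochs followed by a cluster-based permutation test of accuracy against chance. This is a minimal sketch assuming MNE-Python and scikit-learn; the data, epoch dimensions, labels, and the number of simulated subjects are synthetic placeholders, not the study's actual pipeline or parameters.

```python
# Minimal sketch (not the authors' pipeline): decode a binary target-word
# feature (e.g., concrete vs. abstract) at each time point of prime-locked EEG
# epochs, then test group-level accuracy against chance. All data are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from mne.decoding import SlidingEstimator, cross_val_multiscore
from mne.stats import permutation_cluster_1samp_test

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 160, 32, 100      # placeholder epoch dimensions
X = rng.standard_normal((n_epochs, n_channels, n_times))  # epochs x channels x time
y = rng.integers(0, 2, n_epochs)                  # binary feature label per epoch

# Fit a linear SVM independently at every time point of the epoch.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
time_decoder = SlidingEstimator(clf, scoring="accuracy", n_jobs=1)
scores = cross_val_multiscore(time_decoder, X, y, cv=5, n_jobs=1)  # (folds, times)
subject_scores = scores.mean(axis=0)              # one accuracy time course

# Stack per-subject accuracy time courses (simulated here) and test against
# chance (0.5) with a one-sample cluster-based permutation test over time.
group_scores = subject_scores + 0.02 * rng.standard_normal((10, n_times))
t_obs, clusters, cluster_pv, _ = permutation_cluster_1samp_test(
    group_scores - 0.5, n_permutations=1000, tail=1
)
significant = [c for c, p in zip(clusters, cluster_pv) if p < 0.05]
print(f"{len(significant)} significant cluster(s) of above-chance decoding")
```

In the study's design, this kind of analysis would be run separately for each feature (concreteness, word length) and condition (predicted, related, unrelated), with the cluster test indicating at which latencies relative to prime and target onset decoding exceeds chance.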

Topic Areas: Reading, Computational Approaches
