Poster D42, Friday, August 17, 4:45 – 6:30 pm, Room 2000AB

Neural evidence for prediction of animacy features by verbs during language comprehension: Evidence from MEG and EEG Representational Similarity Analysis

Lin Wang1,2, Ole Jensen3, Gina Kuperberg1,2; 1Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA; 2Department of Psychology, Tufts University, Medford, MA, USA; 3Centre for Human Brain Health, University of Birmingham, Birmingham, UK

INTRODUCTION: Previous studies have shown that people generate probabilistic predictions at multiple levels of linguistic representation during language comprehension [1]. Here, we used MEG and EEG in combination with Representational Similarity Analysis (RSA) to seek neural evidence for the prediction of upcoming animacy features based on a verb’s selection restrictions.

METHODS: MEG and EEG signals were recorded simultaneously from 32 participants, who read three-sentence scenarios. The final sentence was presented word by word (word duration: 450ms; interstimulus interval: 100ms) and contained a verb that selected for either animate features (e.g. “cautioned the…”) or inanimate features (e.g. “emptied the…”) of its upcoming noun-phrase argument.

ANALYSIS AND RESULTS: For each scenario, at every time point between the onset of the verb and the onset of the noun, we extracted the spatial pattern of neural activity across all MEG sensors. We then correlated these patterns across all possible pairs of scenarios within the animate-predicting condition and, separately, within the inanimate-predicting condition, and averaged the pairwise correlation R-values to yield two time series of within-condition spatial similarity (see the illustrative sketch following the references). Between 450-600ms after verb onset, spatial patterns were more similar across animate-predicting verbs than across inanimate-predicting verbs. A spatial RSA carried out across all EEG channels revealed a similar result: greater spatial similarity across animate-predicting than inanimate-predicting verbs between 550-600ms after verb onset.

CONCLUSIONS: These findings suggest that animate-selecting verbs produce a more consistent pattern of neural activity than inanimate-selecting verbs do. We suggest that this consistency corresponds to the distinct pattern of neural activity previously associated with animate objects [2], which has been successfully decoded from the MEG signal [3]. If so, our findings provide evidence that comprehenders can use the animacy restrictions of verbs to pre-activate the animate features of nouns before the corresponding bottom-up input becomes available. Finally, the converging findings across MEG and EEG suggest that spatial RSA can be carried out with both techniques.

References
[1] Kuperberg, G. R., & Jaeger, T. F. (2016). Language, Cognition and Neuroscience, 31(1), 32-59.
[2] Grill-Spector, K., & Weiner, K. S. (2014). Nature Reviews Neuroscience, 15(8), 536.
[3] Cichy, R. M., Pantazis, D., & Oliva, A. (2014). Nature Neuroscience, 17(3), 455.
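Illustrative sketch: the following Python code is a minimal example of how a within-condition spatial-similarity time course of the kind described under ANALYSIS AND RESULTS could be computed. It is not the authors' analysis code; the data layout (a trials x sensors x time-points array), the trial mask, and all variable names are assumptions made for illustration only.

import numpy as np
from itertools import combinations

def spatial_rsa_timecourse(data, trial_mask):
    """Average pairwise spatial (across-sensor) correlation at each time point,
    computed over all pairs of trials selected by trial_mask.
    data: array of shape (n_trials, n_sensors, n_times); trial_mask: boolean array."""
    cond = data[trial_mask]                      # trials belonging to one condition
    n_trials, _, n_times = cond.shape
    r = np.zeros(n_times)
    for t in range(n_times):
        patterns = cond[:, :, t]                 # one spatial pattern per trial
        pair_rs = [np.corrcoef(patterns[i], patterns[j])[0, 1]
                   for i, j in combinations(range(n_trials), 2)]
        r[t] = np.mean(pair_rs)                  # mean pairwise R at this time point
    return r

# Two within-condition similarity time series, as described in the abstract
# (meg_data and animate_mask are hypothetical placeholders):
# r_animate   = spatial_rsa_timecourse(meg_data, animate_mask)
# r_inanimate = spatial_rsa_timecourse(meg_data, ~animate_mask)

Comparing r_animate and r_inanimate over the verb-to-noun interval (e.g. with a cluster-based permutation test) would then test whether spatial patterns are more consistent across animate-predicting than inanimate-predicting verbs; the abstract itself does not specify which statistical test was used.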

Topic Area: Meaning: Combinatorial Semantics
