Convolutional networks can be used to model the functional modulation of MEG responses during reading

Poster B90 in Poster Session B, Tuesday, October 24, 3:30 - 5:15 pm CEST, Espace Vieux-Port

Marijn van Vliet1, Oona Rinkinen1, Takao Shimizu1, Anni-Mari Niskanen1, Barry Devereux2, Riitta Salmelin1; 1Aalto University, 2Queen's University Belfast

Reading elicits a series of evoked responses along the left ventral stream. In MEG, notable ones are the Type I, Type II, and N400m responses. The location, timing, and functional behavior of these responses to different stimuli tell the story of a processing pipeline that proceeds from basic visual analysis (Type I) through letter detection (Type II) to lexical analysis (N400m). In this study, we sought to understand this pipeline better by implementing it as a computational model. In contrast to classic models of reading, ours starts with raw pixels, which is required if one wants to reproduce all three of the aforementioned evoked responses. By presenting the same stimuli to both the human participants and the model, we evaluated the model's accuracy both qualitatively (response patterns to experimental contrasts) and quantitatively (correlation with MEG evoked response amplitudes). Our results show that a basic VGG11 architecture trained on ImageNet succeeds in simulating the Type I response but fails to simulate the Type II and N400m responses. By subsequently introducing noisy unit activations, expanding the vocabulary, and adding language statistics to the training set, we arrived at a final model that accurately simulates all three MEG evoked responses.
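The quantitative evaluation mentioned above — correlating per-stimulus model activity with MEG evoked response amplitudes — can be sketched as follows. This is a minimal illustration with synthetic data; the variable names and the toy data are hypothetical and not taken from the study.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two 1-D arrays of per-stimulus values."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc)))

rng = np.random.default_rng(0)
n_stimuli = 118  # hypothetical number of stimuli shown to both model and humans

# Hypothetical simulated activity: e.g., mean activation of one model layer
# per stimulus (in the study, layers of a VGG11-based network).
model_activity = rng.normal(size=n_stimuli)

# Hypothetical MEG evoked response amplitude for the same stimuli
# (here: toy data constructed to correlate with the model activity).
meg_amplitude = 0.8 * model_activity + 0.2 * rng.normal(size=n_stimuli)

r = pearson_r(model_activity, meg_amplitude)
print(f"model-MEG correlation: r = {r:.2f}")
```

In the study itself, one such correlation would be computed per (layer, evoked response) pair, so that each of the Type I, Type II, and N400m responses can be matched against the model stage that best predicts it.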

Topic Areas: Computational Approaches, Reading
