Poster A81, Tuesday, August 20, 2019, 10:15 am – 12:00 pm, Restaurant Hall

MOUS, a 204-subject multimodal neuroimaging dataset to study language processing

Jan Mathijs Schoffelen1, Robert Oostenveld1,3, Nietzsche Lam1,2, Julia Uddén1,2,4, Annika Hultén1,2,5, Peter Hagoort1,2; 1Radboud University, Donders Institute; 2Max Planck Institute for Psycholinguistics; 3Karolinska Institute, Stockholm; 4Stockholm University; 5Aalto University

Here we present an open access dataset, colloquially known as the Mother Of Unification Studies (MOUS) dataset, which contains multimodal neuroimaging data acquired from 204 healthy human subjects. The neuroimaging protocol consisted of magnetic resonance imaging (MRI) to obtain high-spatial-resolution information about brain anatomy and structural connections, as well as functional data during task and at rest. In addition, magnetoencephalography (MEG) was used to obtain high-temporal-resolution electrophysiological measurements during task and at rest. All subjects performed a language task, during which they processed linguistic utterances consisting of either normal or scrambled sentences. Half of the subjects read the stimuli; the other half listened to them. The resting state measurements consisted of 5 minutes eyes-open for MEG and 7 minutes eyes-closed for fMRI. The neuroimaging data, along with the information about the experimental events, are shared according to the Brain Imaging Data Structure (BIDS) format. This unprecedented neuroimaging language data collection allows for the investigation of various aspects of the neurobiological correlates of language.
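Because the data and event information are shared in BIDS format, they can be queried and loaded with standard BIDS-aware tools. Below is a minimal sketch using MNE-BIDS; the root path and the subject/task entity labels are illustrative assumptions, not values taken from the dataset description above.

    # Minimal sketch: loading one MEG recording from a BIDS-formatted dataset
    # with MNE-BIDS. The root path and entity labels (subject, task) are
    # hypothetical placeholders; replace them with the values found in the
    # downloaded dataset.
    from mne_bids import BIDSPath, read_raw_bids

    bids_root = "/data/MOUS"          # assumed local path to the BIDS dataset
    bids_path = BIDSPath(
        subject="0001",               # hypothetical subject label
        task="language",              # hypothetical task label
        datatype="meg",
        root=bids_root,
    )

    # Read the raw MEG recording; event information from the BIDS
    # *_events.tsv sidecar is attached to the Raw object as annotations.
    raw = read_raw_bids(bids_path=bids_path)
    print(raw.info)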

Themes: Reading, Speech Perception
Method: Other