Decoding pictures and words with MEG signals: Evidence for shared neural representation of semantic categories

Poster A2 in Poster Session A, Tuesday, October 24, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Kai Nakajima, Jion Tominaga, Dmitry Patashov, Rieko Osu, Hiromu Sakai; Waseda University

Introduction

People can recognize the semantic category of an object even when it is presented in different modalities, such as an image or a word. Yet it is currently unknown to what extent neural representations of semantic categories are independent of modality. Recent studies using Representational Similarity Analysis of fMRI data revealed that semantic category information is represented widely across hetero-modal cortical areas (Fernandino et al., 2022), and the time course of neural representations of semantic object categories has been examined using Multivariate Pattern Analysis of MEG data (Carota et al., 2022). However, previous studies have not addressed whether these representations are shared between objects presented in different modalities (see Rybar et al., 2022 for a review). To answer this question, we conducted cross-decoding of semantic categories from MEG signals recorded while participants were presented with pictures and words.

Materials and Methods

Eight native Japanese speakers participated in the experiment. MEG data were recorded with a whole-head 64-channel MEG system (Sumitomo Heavy Industries, Ltd.) while participants viewed pictures or words projected on a screen. In the picture condition, participants were asked to name the object aloud; in the word condition, they were asked to judge the familiarity of the object on a 4-point scale. Pictures and words were displayed for 300 ms after a 500 ms presentation of a fixation cross. We prepared eight object categories (Animal, Human, Body part, Vehicle, Food, Inanimate, Manmade place, and Tool artifact). Objects were presented in random order six times each, yielding 384 trials in total. The order of the picture and word conditions was counterbalanced across subjects. Recorded data were preprocessed using the MNE-Python software package (https://mne.tools/stable/index.html). Magnetic field amplitudes from each sensor channel were averaged within consecutive 50 ms time windows after stimulus onset, and classification was performed with the support vector machine implemented in the scikit-learn Python machine learning library (https://scikit-learn.org/stable/). Decoding accuracy was estimated with a hold-out method: training data (80%) and test data (20%) were randomly sampled from the entire dataset.

Results

So far, we have obtained cross-decoding results for the eight categories for one subject. The chance level for eight categories is 12.5%. The cross-decoding accuracy for the picture dataset, using the word dataset for training, ranged from 19.1% to 28.9%, whereas the cross-decoding accuracy for the word dataset, using the picture dataset for training, ranged from 13.3% to 19.4%. This asymmetry suggests that pictures evoke more modality-specific information. We expect to finish analyzing all eight subjects during the summer. We also plan to examine the time course of the formation of neural representations of semantic categories by decoding data from different time windows; based on previous MEG and EEG research on word and image decoding, we anticipate that cross-decoding accuracy will peak at around 100-200 ms after stimulus onset.
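To make the analysis pipeline concrete, the sketch below is a minimal illustration of the cross-decoding scheme described in Materials and Methods: sensor amplitudes are averaged within 50 ms windows, a linear support vector machine is trained on one modality, and the classifier is tested on the other. Synthetic NumPy arrays stand in for the preprocessed MNE-Python epochs, and the trial counts, epoch length, sampling rate, and kernel choice are illustrative assumptions rather than details taken from the abstract.

```python
# Minimal sketch of the cross-decoding analysis; synthetic data stand in
# for the recorded, preprocessed MEG epochs (all sizes are assumptions).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_trials, n_channels = 192, 64     # trials per modality (assumed) x 64 MEG sensors
sfreq = 1000                       # sampling rate in Hz (assumed)
n_times = int(0.6 * sfreq)         # 600 ms of post-stimulus data (assumed)
win = int(0.05 * sfreq)            # 50 ms averaging window, as in the Methods

# Stand-ins for the word- and picture-condition epochs; in practice these
# would come from MNE-Python (e.g., epochs.get_data()) after preprocessing.
X_word = rng.standard_normal((n_trials, n_channels, n_times))
X_pict = rng.standard_normal((n_trials, n_channels, n_times))
y_word = rng.integers(0, 8, n_trials)   # labels for the 8 semantic categories
y_pict = rng.integers(0, 8, n_trials)

def window_average(X, win):
    """Average amplitudes within consecutive time windows and flatten
    (channels x windows) into one feature vector per trial."""
    n_windows = X.shape[-1] // win
    X = X[..., :n_windows * win].reshape(X.shape[0], X.shape[1], n_windows, win)
    return X.mean(axis=-1).reshape(X.shape[0], -1)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))

# Within-modality baseline: 80/20 hold-out split on the word-condition data.
Xtr, Xte, ytr, yte = train_test_split(
    window_average(X_word, win), y_word, test_size=0.2, random_state=0)
clf.fit(Xtr, ytr)
print("within-modality (word) accuracy:", clf.score(Xte, yte))

# Cross-decoding: train on the word condition, test on the picture condition.
clf.fit(window_average(X_word, win), y_word)
print("word -> picture cross-decoding accuracy:",
      clf.score(window_average(X_pict, win), y_pict))
```

Swapping X_word and X_pict in the final step gives the picture-to-word direction; with random data both directions sit near the 12.5% chance level, whereas the recorded data yield the above-chance asymmetry described in the Results.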

Topic Areas: Language Production, Computational Approaches
