
Context dependent neural representations of semantic categories: An MEG study using words and images

Poster E26 in Poster Session E, Thursday, October 26, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Jion Tominaga1, Kai Nakajima1, Dmitry Patashov1, Manabu Tanifuji1, Hiromu Sakai1; 1Waseda University

Introduction: Recent research on human conceptual knowledge has revealed that semantic categories are encoded in neural activity (Fernandino et al., 2022; Nishida et al., 2021). However, it is also known that words and objects are recognised differently depending on their contexts (Yee and Thompson-Schill, 2016; Gao et al., 2023). In this study, we examine the extent to which neural representations of semantic categories change depending on their contexts, and when such context effects emerge during semantic processing. Using MEG, we measured the neural activity of participants as they were visually presented with pictures and words in two distinct contexts: homogeneous and heterogeneous. Preliminary analysis using the representational dissimilarity matrix (RDM; Kriegeskorte et al., 2008) revealed that dissimilarities among semantic categories are larger in a heterogeneous context than in a homogeneous context when participants process words, but no distinct difference was observed when they processed pictures. Additionally, these dissimilarities vary across the time course of semantic processing.

Methods: Eight native Japanese speakers underwent MEG recording with a whole-head 64-channel MEG system (Sumitomo Heavy Industries, Ltd.) while performing a picture-naming task and a word familiarity rating task (four-point scale). Stimuli (pictures or words) were projected onto a screen inside the MEG chamber and viewed through a prism glass. The tasks were divided into contextualised and decontextualised conditions: a homogeneous block, in which concepts from the same category were presented consecutively, and a heterogeneous block, in which concepts from different categories were presented in random order. Each trial consisted of a fixation cross followed by presentation of a picture or word for 300 ms. We prepared eight objects for each of eight semantic categories (Animal, Human, Body part, Vehicle, Food, Inanimate, Man-made place, and Tools/Artefacts). Picture images were selected from the Bank of Standardized Stimuli (BOSS; Brodeur et al., 2014). Objects were presented in random order six times each, yielding 384 trials per condition. The recorded data were preprocessed and analysed using the MNE-Python software package (https://mne.tools/stable/index.html). RDMs were computed from MEG data averaged across epochs within each category, using dynamic time warping (DTW) and a sliding-window approach (window size: 100 ms; step size: 50 ms).

Results: Our preliminary analysis of data from one subject suggests that the temporal pattern of MEG amplitudes in the familiarity rating task is similar between categories sharing many semantic features (e.g., Man-made places and Humans), while the patterns diverge for category pairs sharing fewer semantic features (e.g., Vehicles and Food). Between the two stimulus types, words tend to be more influenced by context than pictures. Additionally, the dissimilarities in MEG patterns between each pair of categories tend to increase with time after stimulus onset. We plan to compare MEG patterns between contextualised and decontextualised conditions across subjects to elucidate the spatiotemporal and spectral characteristics of modality-specific context sensitivity.
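The sliding-window DTW analysis described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: it assumes category-averaged data from a single sensor (the actual study used multichannel MEG data preprocessed in MNE-Python), and the function names, the plain NumPy DTW, and the simulated input are all hypothetical.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D time series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Accumulate the cheapest warping path into cell (i, j).
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def sliding_window_rdms(evoked, sfreq, win_ms=100, step_ms=50):
    """Compute one RDM per time window.

    evoked : (n_categories, n_times) array of category-averaged signal
             from one sensor (illustrative; real analyses use many channels).
    sfreq  : sampling frequency in Hz.
    Returns a list of symmetric (n_categories, n_categories) RDMs.
    """
    win = int(sfreq * win_ms / 1000)
    step = int(sfreq * step_ms / 1000)
    n_cat, n_times = evoked.shape
    rdms = []
    for start in range(0, n_times - win + 1, step):
        seg = evoked[:, start:start + win]
        rdm = np.zeros((n_cat, n_cat))
        for i in range(n_cat):
            for j in range(i + 1, n_cat):
                # Pairwise DTW dissimilarity between category time courses.
                d = dtw_distance(seg[i], seg[j])
                rdm[i, j] = rdm[j, i] = d
        rdms.append(rdm)
    return rdms

# Hypothetical example: 8 categories, 0.5 s of data at 250 Hz.
rng = np.random.default_rng(0)
evoked = rng.standard_normal((8, 125))
rdms = sliding_window_rdms(evoked, sfreq=250)
```

Each RDM has a zero diagonal and one entry per category pair; tracking how the off-diagonal entries change from window to window gives the time course of category dissimilarity that the abstract reports.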

Topic Areas: Meaning: Lexical Semantics, Language Production
