Tracking the neural coding shift from low-level linguistic features to sensorimotor control signals.

Poster E102 in Poster Session E, Thursday, October 26, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Marion Bouffier1, Reidar Riveland1, Tomás Gallo Aquino2, Nuttida Rungratsameetaweemana3, Alexandre Pouget1, Valentina Borghesani1; 1University of Geneva, Geneva, Switzerland, 2Cedars-Sinai Medical Center, Los Angeles, California, 3Salk Institute, San Diego, California

Human linguistic intelligence allows us to appropriately follow verbal instructions we have never encountered before, an ability known as “zero-shot learning”. At least three processing stages are required: the linguistic analysis of the instructions, the extraction of their meaning, and the sensorimotor implementation of the required behaviour. To investigate how language scaffolds sensorimotor representations to achieve zero-shot learning, we will leverage a recent artificial model put forward by Riveland and Pouget (2022). In this ongoing functional magnetic resonance imaging (fMRI) study, we aim to elucidate the nature of the underlying computations and representations by comparing those inferred from human neural and behavioural data to those observed in an artificial model trained to perform a similar task (Schrimpf et al., 2020). We will recruit young adults with no history of neurological or psychiatric disorders and scan them while they perform a judgement task: they will see two pictures and make a simple choice based on verbal instructions. The instructions will reflect two conditions: judgements on either the size or the colour of the images. To study meaning-related changes in the representations while controlling for other linguistic factors, the sentences (12 in total; 6 per condition) will have similar meanings but vary in syntactic complexity (i.e., passive, negative, and relative constructions), such as “Choose the image with the most colours” versus “The image with the most colours must be chosen”. Finally, to track the effect of the selective attentional focus induced by the verbal instructions, the pictures will belong to categories known to elicit activity in specific patches of the ventral visual pathway (cars, fruits/vegetables, animals), and the instructions will be displayed either at the start of the trial or between the two pictures. The resulting acquisitions will be analysed using multivariate pattern analysis, both in regions of interest (ROIs) and through a searchlight approach. Our ROIs will include the primary visual cortex, a region involved in lower-level visual processing (Harrison & Tong, 2009); the anterior temporal lobe and the angular gyrus, regions associated with semantic processing (Farahibozorg et al., 2022); and the supplementary motor area and the premotor cortex, regions associated with behavioural implementation (Konoike et al., 2015). We will use representational similarity analysis (RSA) to quantify the similarity between trials and between the representations observed in humans and in computational models (Kriegeskorte et al., 2008). Our results, describing the representational shift required to perform task-appropriate behaviours following minimal verbal instruction, will pave the way towards answering a key question: what computational advantage do language representations offer?
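
For readers unfamiliar with RSA, the minimal sketch below illustrates the second-order comparison the abstract describes: building representational dissimilarity matrices (RDMs) from condition-wise activity patterns and rank-correlating the human and model RDMs (Kriegeskorte et al., 2008). All array names, shapes, and values are hypothetical placeholders; in the actual study the patterns would come from ROI or searchlight estimates and from the artificial model's hidden activations.

```python
# Minimal RSA sketch (Kriegeskorte et al., 2008). All inputs below are
# hypothetical placeholders, not the study's actual pipeline.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical patterns: one row per instruction sentence (12 total);
# columns are ROI voxels (human) or hidden units (artificial model).
human_patterns = rng.standard_normal((12, 200))
model_patterns = rng.standard_normal((12, 128))

def rdm(patterns):
    """Condensed representational dissimilarity matrix:
    1 - Pearson r for every pair of rows (conditions)."""
    return pdist(patterns, metric="correlation")

human_rdm = rdm(human_patterns)
model_rdm = rdm(model_patterns)

# Second-order comparison between systems: Spearman's rho on the RDMs'
# upper triangles (rank correlation, because dissimilarity scales are
# not directly commensurable across humans and models).
rho, p = spearmanr(human_rdm, model_rdm)
print(f"human vs. model RDM: rho={rho:.3f}, p={p:.3f}")

# A categorical "design" RDM for the size-vs-colour manipulation:
# 0 if two sentences share a judgement condition, 1 otherwise.
condition = np.repeat([0, 1], 6)  # 6 size sentences, 6 colour sentences
design_rdm = pdist(condition[:, None], metric="hamming")
rho_design, _ = spearmanr(human_rdm, design_rdm)

# squareform() recovers the full 12 x 12 matrix for inspection/plotting.
full_human_rdm = squareform(human_rdm)
```

The design RDM shows how a hypothesis matrix can test for meaning-related structure (size vs. colour judgements) while the sentence-level RDMs remain sensitive to syntactic variation; the same logic extends to searchlight maps by computing an RDM within each sphere.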

Topic Areas: Control, Selection, and Executive Processes, Computational Approaches
