A comparison of different methods for new word learning and the underlying neural representations

Poster D110 in Poster Session D, Wednesday, October 25, 4:45 - 6:30 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Shuai Wang1, Sophie Restoy1, Julien Sein2, Bruno Nazarian2, Jean-luc Anton2, Anne-Sophie Dubarry3, Clément François1, Felipe Pegado4, Franck Lamberton5, Chotiga Pattamadilok1; 1Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France, 2Aix Marseille Univ, CNRS, Centre IRM-INT@CERIMED, Institut des Neurosciences de la Timone – UMR 7289, Marseille, France, 3Aix Marseille Univ, CNRS, LNC, Marseille, France, 4Université Paris Cité, LaPsyDÉ, CNRS, F-75005 Paris, France, 5CERMEP – Imagerie du vivant, MRI Department and CNRS UMS3453, Lyon, France

Audio-visual association is fundamental in language acquisition. Early on, infants perceive speech together with speakers’ articulatory gestures (Aud-Artic learning) and later, when they learn to read, speech is associated with orthography (Aud-Ort learning). Our previous behavioral study suggests that new words learned through different methods might be consolidated and stored in the mental lexicon differently (Pattamadilok et al., 2021), with multimodal learning methods leading to higher learning efficiency than unimodal ones. This difference in learning efficiency could be due to the nature of the underlying representations built up through the different learning methods. The present study combines a learning paradigm with fMRI to 1) determine whether new words learned through different methods evoke the same or different brain activity, and 2) investigate which learning method leads to brain activity most similar to that evoked by known words.

METHODS: 25 native French speakers were recruited. In a within-subject design, participants learned three sets of 15 novel words associated with 15 unknown objects through three different methods, i.e., Audio, Aud-Ort and Aud-Artic. The fMRI acquisition was conducted in two sessions: immediately after the learning phase and ~24 hours later. In each session, participants performed an auditory lexical decision (ALD) task on five types of spoken input: pseudowords, known words, and new words learned through each of the three methods. fMRIPrep and AFNI were used to process the MRI data. Linear mixed models were applied for the group analysis using the ‘lme4’ package in R. Both ROI-based and whole-brain analyses were conducted. Three ROIs were selected, one for each type of modality-specific processing: left-STG for speech processing, left-vOT for orthographic processing, and left-SMA for articulatory gesture processing. For each ROI, several linear mixed models were built, with Stimulus Type, Session and their interaction as fixed factors; learning performance and accuracy and/or RT of the ALD task as covariate(s); and participants and/or their response specificity and sensitivity as random factor(s). According to AIC and BIC, the optimal model was BOLD = Stimulus Type * Session + Participant. This model was then applied to the whole-brain analysis.

SUMMARY: As expected, preliminary analyses showed no significant Stimulus Type effect in the left-STG (p > 0.13), indicating that the area was involved in speech processing regardless of learning method. Both left-vOT and left-SMA showed a significant Stimulus Type effect (ps < 2e-5). Post-hoc tests revealed that, in left-vOT, Aud-Ort new words evoked higher activation than known words (p < 0.038). In left-SMA, all types of new words evoked higher activation than known words (ps < 0.031), while pseudowords evoked higher activation than known words and Audio new words (ps < 0.034). Although the ROI-based analysis did not show any consolidation effect, the whole-brain analysis revealed stronger activation in session 2 for Aud-Artic learning in the right Fusiform and middle Occipital regions (FDR q < 0.05). In short, learning modality seems to have an impact on how spoken words are encoded in different areas of the language network, even when the input is presented auditorily.
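As a rough illustration of the model-selection step described above, the winning specification could be expressed with lme4 roughly as follows; this is a minimal sketch, and the data frame and column names (roi_df, bold, stim_type, session, acc, rt, participant) are hypothetical placeholders, not taken from the study’s actual analysis code.

# Sketch of the ROI-level model comparison, assuming a long-format data
# frame `roi_df` with one row per participant and condition
# (all object and column names here are hypothetical).
library(lme4)

# Candidate models: Stimulus Type, Session and their interaction as fixed
# factors, with or without behavioral covariates, and a random intercept
# per participant. REML = FALSE so that AIC/BIC comparisons are valid
# across models with different fixed effects.
m_covar <- lmer(bold ~ stim_type * session + acc + rt + (1 | participant),
                data = roi_df, REML = FALSE)
m_basic <- lmer(bold ~ stim_type * session + (1 | participant),
                data = roi_df, REML = FALSE)

# Compare candidates; per the abstract, the simpler model
# (BOLD = Stimulus Type * Session + Participant) was preferred.
AIC(m_covar, m_basic)
BIC(m_covar, m_basic)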

Topic Areas: Multisensory or Sensorimotor Integration, Speech Perception
