Does prediction drive neural alignment in conversation?

Poster E7 in Poster Session E, Thursday, October 26, 10:15 am - 12:00 pm CEST, Espace Vieux-Port

Emilia Kerr1, Benjamin Morillon1,3, Kristof Strijkers1,2; 1AMU, 2CNRS, 3INSERM

Recent studies on neural alignment in language (i.e., brain-to-brain synchronisation between interlocutors) have shown that successful communication relies on the synchronisation of the same brain regions in both speakers. However, more explicit mechanistic links between neural alignment and specific linguistic functions of the communicative signal remain to be established. This project relies on the hypothesis that the degree of neural synchronisation between interlocutors depends on the degree of predictive processing: the more predictability between speaker and listener, the more their brain responses will align and display similar oscillatory dynamics (Pickering & Gambi, 2018). We test this hypothesis by isolating word semantics (e.g., animal vs. tool word category) in an experimental set-up where (a) prediction effects are tested at the behavioural level and (b) the brain activity (EEG) of two interlocutors engaging in simple conversations is recorded simultaneously and analysed in an event-related fashion (i.e., at the level of individual word components rather than the whole communicative signal). Experiment 1 presents a novel interactional task in which participants play an association game: speaker A names a picture (either an animal or a tool) and speaker B must respond with a semantically related word. Importantly, the predictability of the upcoming object is manipulated: prior to picture naming, participants hear either a highly predictable or a non-predictable sentence up to its final word, which speaker A then completes by naming the object. Data have been collected from 20 dyads, and analyses of speech onsets showed a significant reduction of response latencies in the predictable condition for both speaker A and speaker B, demonstrating that semantic predictions influence dyadic interaction. In Experiment 2 (currently being analysed), participants play the same association game but without predictive priming: speaker A sees a picture and names it, and speaker B replies with an association. The relevant question is now whether we can find meaning-specific brain-to-brain synchronisation between tool- vs. animal-related brain regions, the defining dimension along which participants perform the task. Importantly, tools and animals show well-known cortical dissociations in the brain (e.g., Grisoni et al., 2021). Moreover, while we have no control over the exact words with which an interlocutor will reply, we do control the semantic categories of those words, which allows us to explore whether brain-to-brain synchrony emerges for specific word meanings (rather than for 'language' in general). The analysis methods we are currently implementing include Riemannian geometry-based EEG decoding and source localisation. Experiment 3, also a dual-EEG set-up, will test the hypothesis that this co-activation is more synchronised when semantic predictions have primed the target word.

References: Grisoni, L., Tomasello, R., & Pulvermüller, F. (2021). Correlated brain indexes of semantic prediction and prediction error: Brain localization and category specificity. Cerebral Cortex, 31(3), 1553–1568. Pickering, M. J., & Gambi, C. (2018). Predicting while comprehending language: A theory and review. Psychological Bulletin, 144(10), 1002–1044.
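
For illustration, a minimal sketch of what Riemannian geometry-based EEG decoding of the animal vs. tool category could look like, using the pyriemann and scikit-learn libraries; the epoch array, labels, channel/sample counts, and estimator choices below are placeholder assumptions, not the authors' actual pipeline:

# Minimal sketch of Riemannian-geometry-based EEG decoding (assumption:
# epochs X of shape (n_trials, n_channels, n_times) time-locked to word
# onset, labels y coding 0 = animal, 1 = tool). Placeholder random data
# stands in for real dual-EEG recordings.
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.classification import MDM
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 64, 200))   # placeholder EEG epochs
y = rng.integers(0, 2, 100)               # placeholder category labels

# Map each epoch to a spatial covariance matrix, then classify with the
# minimum-distance-to-mean classifier under the Riemannian metric.
clf = make_pipeline(Covariances(estimator="oas"), MDM(metric="riemann"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Decoding accuracy (animal vs. tool): {scores.mean():.2f}")

In the real analysis, X would be the recorded epochs from each interlocutor and y the semantic category of the named word; the same per-trial covariance representation can then feed category-specific between-brain comparisons.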

Topic Areas: Language Production, Speech Perception
