
Studying the association between co-speech gestures, mutual understanding and inter-brain synchrony in face-to-face conversations

Poster C13 in Poster Session C, Wednesday, October 25, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Sara Mazzini1, Judith Holler1,2, Peter Hagoort1,2, Linda Drijvers1,2; 1Max Planck Institute for Psycholinguistics, 2Donders Institute for Brain, Cognition and Behaviour

Successful communication rests on achieving mutual understanding and resolving troubles in understanding between interlocutors. Co-speech gestures represent a flexible and adaptable resource for meeting different communicative demands; they have been shown to contribute significant amounts of semantic information and to facilitate mutual understanding in interaction. Recently, inter-brain synchrony has been proposed as an important aspect of social interaction and is often deemed to reflect alignment and mutual understanding. Notably, both the spatial orientation towards the interlocutor and the visibility of their bodily signals have been observed to affect inter-brain synchrony, providing further evidence for the importance of visual communicative signals in dyadic interactions. Nonetheless, it has not yet been investigated whether the presence of visual communicative signals such as co-speech gestures, which are known to facilitate mutual understanding, actually affects the strength of inter-brain synchrony. In the present study, we used dual-EEG and audiovisual recordings to investigate whether inter-brain synchrony is modulated by the presence of co-speech gestures. We do so in both clear and noisy communication settings, in order to focus on periods of trouble in understanding, which can be elicited both by problems in general understanding and by external conditions. Dyads performed a tangram-based referential communication task with and without background noise, while their EEG and audiovisual behavior were recorded. Representational gestures and semantically meaningless movements are being annotated in the audiovisual data. We compare inter-brain synchrony in moments where representational gestures are used versus meaningless movements, in both the clear and the noise condition. Additionally, other-initiated repairs (e.g., clarification requests) are annotated to explore whether co-speech gestures modulate inter-brain synchrony during episodes of miscommunication (repair initiations) and episodes of mutual understanding (repair solutions). Overall, we expect higher inter-brain synchrony in repair solutions than in repair initiations, and we expect the presence of representational gestures to increase inter-brain synchrony within repair conditions. Additionally, we expect this pattern to be even stronger in the noise condition, due to the auditorily challenging communication setting. Preliminary results will be presented and discussed in light of previous findings on inter-brain dynamics during face-to-face communication.

Topic Areas: Meaning: Combinatorial Semantics, Phonology and Phonological Working Memory
