Presentation


How does audiovisual prosody influence spoken language comprehension?


Poster C106 in Poster Session C, Wednesday, October 25, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Ambra Ferrari1,2, Peter Hagoort1,2; 1Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands, 2Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands

Natural face-to-face communication relies not only on speech but also on bodily signals such as manual gestures. Beat gestures are rhythmic, non-meaningful hand movements that typically accompany prosodic stress in conversation. Yet, it is still a matter of debate whether and how beats, as well as their co-occurrence with prosody (i.e. audiovisual prosody), influence language comprehension. On the one hand, beats may function as visual focus markers. Accordingly, EEG evidence suggests that beats facilitate phonological, semantic, and syntactic processing of spoken sentences. Hence, it is conceivable that beats drive attention towards the concurrent speech signal and thereby elicit deeper processing of the companion constituent. On the other hand, beats may allow inferences about the speaker’s metacognition. Accordingly, bodily features such as gaze and eyebrow movements seem to influence how listeners grade the speaker’s level of knowledge in question answering. Similarly, beats may trigger inferences of high speaker confidence and thereby elicit the expectation of correctness. In a behavioural study, we directly adjudicated between these two hypotheses. Further, we evaluated whether and how beats interact with prosodic stress to influence spoken language comprehension. Finally, we contrasted beats with grooming gestures that are closely matched in terms of kinematics but are neither perceived as communicatively intended nor meaningfully related to the speech they accompany. In a comprehension task, we presented participants with unique spoken sentences sometimes containing a slight semantic anomaly, which may go unnoticed depending on the level of processing of the corresponding constituent (“semantic illusion”). In a 2×2×3 repeated-measures design, we independently manipulated semantic congruence (yes, no), prosodic stress (present, absent) and manual gesture (videos of a speaker performing beat, grooming or no gesture). Prosodic stress and manual gestures were placed on the critical word that determined the presence of a semantic anomaly. To probe the degree of semantic illusion, participants were instructed to focus on the meaning of each spoken sentence and report as accurately and as quickly as possible whether the sentence was true or false in a yes/no forced-choice task. We evaluated participants’ accuracy and response times to verify whether and how manual gestures and prosodic stress influenced the semantic illusion. If beats (and/or prosodic stress) functioned as focus markers, they would decrease the semantic illusion; if they elicited the expectation of correctness, they would instead increase the illusion. Preliminary results show an interactive effect of manual gestures and prosodic stress: participants showed a higher degree of semantic illusion when prosodic stress was combined with beats, compared to grooming and no gesture; the corresponding response times were also significantly faster, suggesting that responses were quickly resolved towards a bias for correctness. Thus, the combination of beats and prosody may trigger inferences of high speaker confidence, drive the expectation of correctness and thereby elicit shallower processing of the companion constituent, in line with Grice’s cooperative principle in conversation. An ongoing fMRI study will elucidate the brain mechanisms responsible for this effect, which highlights the influence of metacognition on language comprehension in multimodal face-to-face communication.
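To make the analysis approach concrete, the following is a minimal, purely illustrative sketch of how accuracy and response times from such a 2×2×3 repeated-measures design could be analysed in Python. The column names, factor levels, simulated data, and the use of statsmodels' AnovaRM are assumptions for illustration and do not reflect the authors' actual pipeline.

# Hypothetical analysis sketch for a 2 (congruence) x 2 (stress) x 3 (gesture)
# repeated-measures design; all names and data are illustrative assumptions.
from itertools import product

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

participants = range(1, 31)
congruence_levels = ["congruent", "incongruent"]
stress_levels = ["present", "absent"]
gesture_levels = ["beat", "grooming", "none"]

# Simulate 20 trials per participant per design cell (placeholder data only).
rows = []
for p, c, s, g in product(participants, congruence_levels,
                          stress_levels, gesture_levels):
    correct = rng.binomial(1, 0.85, 20)                # true/false judgement accuracy
    rt = rng.lognormal(mean=-0.2, sigma=0.3, size=20)  # response times in seconds
    for acc, t in zip(correct, rt):
        rows.append({"participant": p, "congruence": c, "stress": s,
                     "gesture": g, "correct": acc, "rt": t})
trials = pd.DataFrame(rows)

# One mean per participant and design cell, as required by AnovaRM.
cell_means = (trials
              .groupby(["participant", "congruence", "stress", "gesture"],
                       as_index=False)
              .agg(accuracy=("correct", "mean"), rt=("rt", "mean")))

# Repeated-measures ANOVAs on accuracy (degree of semantic illusion) and RT;
# the gesture-by-stress interaction is the effect of interest.
for dv in ("accuracy", "rt"):
    result = AnovaRM(data=cell_means, depvar=dv, subject="participant",
                     within=["congruence", "stress", "gesture"]).fit()
    print(f"--- {dv} ---")
    print(result.anova_table)

As a design note, trial-level accuracy in a paradigm like this could alternatively be modelled with a logistic mixed-effects model over participants and items rather than averaged per cell; the cell-mean ANOVA above is used here only to keep the sketch compact.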

Topic Areas: Phonology and Phonological Working Memory, Perception: Auditory
