
Poster B74, Wednesday, November 8, 3:00 – 4:15 pm, Harborview and Loch Raven Ballrooms

Alpha and beta oscillations in the language network, motor and visual cortex index the semantic integration of speech and gestures in clear and degraded speech

Linda Drijvers1,2,3, Asli Ozyurek1,2,3, Ole Jensen4; 1Radboud University, Centre for Language Studies, Nijmegen, The Netherlands; 2Radboud University, Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands; 3Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; 4School of Psychology, Centre for Human Brain Health, University of Birmingham, United Kingdom

Oscillatory dynamics are thought to subserve the integration of information from multiple modalities. This is particularly relevant during face-to-face communication, which integrates auditory (e.g., speech) and visual (e.g., gestures) inputs. Under adverse listening conditions, speech comprehension can be improved by the semantic information conveyed by these gestures. However, when gestures mismatch speech, audiovisual integration might be hindered, especially when speech is degraded and the semantic information from gestures cannot help to resolve the remaining auditory cues. Here, we used MEG to investigate how oscillatory dynamics support the semantic integration of speech and iconic gestures in clear and degraded speech. Participants were presented with videos of an actress uttering an action verb, accompanied by a matching gesture (e.g., a mixing gesture + 'mixing') or a mismatching gesture (e.g., a drinking gesture + 'walking'). Speech in the videos was either clear or degraded by 6-band noise-vocoding. Semantic congruency and speech degradation modulated oscillatory activity in the alpha (8-12 Hz) and beta (15-25 Hz) bands. Source analyses revealed larger alpha and beta power suppression in LIFG and visual cortex when speech was accompanied by a mismatching as compared to a matching gesture, but only when speech was clear. This indicates that the visual system is more strongly engaged in allocating attention to mismatching than to matching gestures when speech is clear. The observed congruency effects in the LIFG likely reflect an increased semantic processing load to resolve the conflict. This conflict was reduced when speech was degraded, due to the lack of auditory cues. In clear, but not degraded, speech, beta power was more suppressed in (pre)motor cortex when a gesture mismatched rather than matched the speech signal, suggesting that a listener might simulate the mismatching gesture to re-evaluate its fit to the speech. Beta power was less suppressed in MTG/STG, MTL and AG when speech was degraded and gestures mismatched rather than matched the speech. This suggests that listeners try to resolve the speech signal, but that semantic audiovisual integration might be hindered when the mismatching gesture fails to aid retrieval of the degraded input. Our results provide novel insight into how low-frequency oscillations support semantic audiovisual integration in clear and degraded speech: when gestures mismatch and speech is clear, listeners engage the extended language network to process the mismatching gesture. In degraded speech, the extended language network is less engaged, possibly reflecting the hindered coupling of gestures and the degraded signal.
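
The degraded-speech stimuli were created with 6-band noise-vocoding, a technique that preserves the slow amplitude envelope of the speech in a small number of frequency bands while replacing the spectral fine structure with noise. The sketch below (not part of the original abstract) illustrates the general technique in Python with NumPy/SciPy; the band edges, envelope cutoff, and filter settings are illustrative assumptions, not the parameters used for the actual stimuli.

    # Minimal sketch of n-band noise-vocoding (illustrative parameters only).
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def noise_vocode(speech, fs, n_bands=6, f_lo=50.0, f_hi=7500.0, env_cutoff=30.0):
        """Replace the fine structure of `speech` with band-limited noise,
        keeping only the per-band amplitude envelopes."""
        speech = np.asarray(speech, dtype=float)
        # Logarithmically spaced band edges (assumed; vocoders often use
        # cochlear/ERB spacing instead).
        edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
        noise = np.random.default_rng(0).standard_normal(len(speech))
        out = np.zeros_like(speech)

        for lo, hi in zip(edges[:-1], edges[1:]):
            # Band-pass both the speech and the noise carrier.
            b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
            band_speech = filtfilt(b, a, speech)
            band_noise = filtfilt(b, a, noise)

            # Amplitude envelope of the speech band (Hilbert magnitude,
            # smoothed with a low-pass filter).
            env = np.abs(hilbert(band_speech))
            b_env, a_env = butter(2, env_cutoff, btype="lowpass", fs=fs)
            env = filtfilt(b_env, a_env, env)

            # Modulate the noise carrier with the speech envelope.
            out += env * band_noise

        # Match the RMS level of the original speech.
        out *= np.sqrt(np.mean(speech ** 2) / np.mean(out ** 2))
        return out

    # Example: degraded = noise_vocode(clean_speech, fs=44100, n_bands=6)

With only six bands, little spectral detail survives, which is what makes the resulting speech effortful to understand without supporting visual information such as gestures.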

Topic Area: Perception: Speech Perception and Audiovisual Integration
