Poster D65, Friday, August 17, 4:45 – 6:30 pm, Room 2000AB
Gesture incongruity effects on speech preserved with verbal but not visuospatial WM load: An ERP study
Jacob Momsen1,2, Jared Gordon1, Seana Coulson1; 1University of California San Diego, 2San Diego State University
The present study investigated the roles of verbal and visuospatial working memory (WM) in the comprehension of speech accompanied by iconic gestures. In previous experiments, participants performed discourse comprehension tasks with naturalistic videos while concurrently performing secondary tasks that taxed either verbal or visuospatial WM resources. Those studies suggest a substantial role for visuospatial WM in integrating information in co-speech gestures with concurrent speech; however, they relied on behavioral responses collected after the multimodal discourse. Here we used EEG to compare the impact of verbal versus visuospatial memory load on real-time processing of speech in multimodal discourse. EEG was recorded as healthy adults performed a verbal (n=14) or a visuospatial (n=14) WM task. In the verbal condition, participants encoded a series of either one (low load) or four (high load) digits; in the visuospatial condition, they encoded a series of either one or four dot locations on a grid. During the rehearsal period of the WM task, participants watched a video of a man describing objects, followed by a picture probe showing the referent of his discourse. Gestures matched the speech in half of the videos and mismatched it in the other half. ERPs were time-locked to the onset of the final item encoded during the memory task, to the first content word in each video, and to the onset of the picture probes. ERPs time-locked to the final item in the memory encoding task were measured 200-500ms post stimulus onset. Visuospatial load resulted in a widely distributed positivity, whereas verbal load produced a negativity (Dots: F(1,13)=18.1, p < 0.001; Digits: F(1,13)=52.7, p < 0.001). Differences in the load effects confirm that partially non-overlapping brain regions were recruited for each memory task. N400 effects for the first content words in the videos were measured 200-400ms post stimulus onset.
A repeated measures ANOVA in the digits condition with factors Load (low/high), Gestures (match/mismatch), and ROI revealed an interaction of Gestures by ROI (F(6,78)=3.54, p < 0.05). An identical ANOVA revealed no significant effects in the dots condition. The N400 effect in the digits condition suggests that participants' sensitivity to gestural information was preserved under a verbal load; its absence in the dots condition suggests that co-speech gesture comprehension was compromised by the load on visuospatial WM. The mean amplitude of ERPs to pictures was measured 200-500ms post stimulus onset to index the N300/N400. Pictures elicited no gesture effects in the digits condition. In the dots task, analysis revealed an interaction between Load and Gestures, reflecting gesture effects only in high load trials (F(1,13)=6.39, p < 0.05). Because these effects were associated with poor task performance, we hypothesize that gesture effects on pictures emerged when participants were unable to suppress irrelevant gestural information during the videos. Speech-gesture incongruity effects thus emerged during the speech itself under verbal load, but on the picture probes presented afterwards under visuospatial load. These differences in the timing of speech-gesture incongruity effects support a dissociation in the contributions of verbal and visuospatial WM to multimodal discourse comprehension.
Topic Area: Signed Language and Gesture