

Poster D19, Friday, August 17, 4:45 – 6:30 pm, Room 2000AB

Multimodal effects on comprehension in left hemisphere stroke

Laurel Buxbaum1, Harrison Stoll1, Anna Krason2, Alessandro Monte2, Gabriella Vigliocco2;1Moss Rehabilitation Research Institute, 2University College London

Face-to-face communication is multimodal in nature, comprising speech as well as co-speech gestures, and speech and gesture share large portions of a left-lateralized neuroanatomic network. Yet studies of language or gesture are typically performed in isolation. Furthermore, most research informing the rehabilitation of language disorders has not taken into account the multimodal information accompanying speech, and studies of limb apraxia (in which gesture comprehension deficits play a prominent role) have rarely considered the influence of language. Consequently, there is limited understanding of the factors that modulate the effects of gestural input on speech comprehension (or the effects of speech on gesture comprehension), the clinical characteristics of the individuals who may benefit from multimodal information (or, potentially, be adversely affected by it), or which brain regions play critical roles in multimodal gain or disruption.

To explore the lesion, cognitive, and psycholinguistic characteristics of patients who benefit from (or are disrupted by) congruent or incongruent speech and gesture, we investigated aphasic and apraxic patients’ comprehension of audiovisual speech, gesture, and speech/gesture combinations. Twenty-nine left hemisphere stroke patients and 15 matched controls performed a picture-video matching task in which they were cued in each block to attend to the speech or gesture present in the video. Videos showed an actor speaking, gesturing, or both, and the unattended modality (when present) was congruent or incongruent with the attended modality. Separately, we assessed lexical-semantic control with a semantic triplets task and gesture recognition with a gesture-word matching task. Finally, we obtained research-quality MRI scans and performed Support Vector Regression-Lesion Symptom Mapping (SVR-LSM) to identify the brain regions that, when lesioned, were associated with abnormally large gains or disruptions (p < .05, corrected for multiple comparisons).

Behavioral data indicated that patients showed both gains from congruent cross-modal information and disruptions from incongruent information that were significantly greater than those seen in controls. Furthermore, patients with impaired lexical-semantic control were particularly sensitive to the congruence of gesture information when attending to speech; conversely, patients with impaired gesture comprehension were particularly sensitive to the congruence of speech information when attending to gestures. SVR-LSM analyses demonstrated a mirrored pattern of gain from congruent cross-modal information when patients attended to speech or gestures. In the speech task, patients with inferior frontal gyrus (IFG) lesions were particularly likely to benefit from congruent gesture, whereas patients with posterior (temporo-parietal junction, TPJ) lesions were less so. In the gesture task, patients with TPJ lesions were particularly likely to benefit from congruent speech, whereas patients with IFG lesions were less so.

Thus, multimodal information has a strong impact on comprehension in patients with left hemisphere stroke. Of relevance to aphasia rehabilitation, there were indications that patients with impaired lexical-semantic access and/or IFG lesions may be particularly amenable to the benefit of co-speech gestures. Additional studies in our labs will explore whether this benefit reflects reliance on the intact gestural channel or, rather, integration of speech and gesture input into a common conceptual representation.

Topic Area: Perception: Speech Perception and Audiovisual Integration
