Multimodal language processing in adults with moderate-severe traumatic brain injury

Poster C104 in Poster Session C, Wednesday, October 25, 10:15 am - 12:00 pm CEST, Espace Vieux-Port

Sharice Clough¹, Sarah Brown-Schmidt², Melissa Duff¹; ¹Department of Hearing and Speech Sciences, Vanderbilt University Medical Center; ²Department of Psychology and Human Development, Vanderbilt University

Language is multimodal and situated in rich visual contexts. Language is also incremental, unfolding moment to moment in real time, yet few studies have examined how spoken language interacts with gesture and visual context during multimodal language processing. Gesture is a rich communicative cue that is integrally related to speech and often depicts concrete referents from the visual world (e.g., the size, shape, or movement of objects). Using eye-tracking in an adapted visual world paradigm, we examined how participants with and without moderate-severe traumatic brain injury (TBI) use gesture to resolve temporary referential ambiguity. Participants viewed a screen with four objects and one video. The speaker in the video produced subject-verb-object sentences (e.g., “The girl will eat the very good sandwich”), accompanied by either a meaningful iconic gesture (e.g., a sandwich-holding gesture) or a meaningless grooming movement (e.g., an arm scratch) during the verb phrase “will eat.” We measured participants’ gaze to the target object (e.g., sandwich), a semantic competitor (e.g., apple), and two distractor items (e.g., piano, guitar) during the critical window between movement onset in the gesture modality and onset of the spoken referent in speech. We used dynamic generalized linear mixed models to predict whether participants fixated the target (coded 1) or not (coded 0), with fixed effects for movement type (gesture vs. grooming movement), participant group (TBI vs. non-injured comparison), and their interaction. We included a first-order autoregressive term, AR(1), and time as covariates, accounting for the serial dependency of fixation location from one timepoint to the next and the tendency of participants to fixate the target over the course of a trial. We found a significant main effect of movement type (β = 0.61, z = 19.57, p < .001): non-injured participants were 1.84 times more likely to fixate the target when the speaker produced a gesture compared to a grooming movement. There was no main effect of group (β = 0.29, z = 1.59, p = .11), indicating that participants with TBI did not significantly differ from non-injured peers in their overall likelihood of fixating the target. A significant interaction between group and movement type (β = -0.22, z = -5.00, p < .001) indicated that the effect of movement type differed by group. Participants with TBI also showed a significant effect of movement type (β = 0.40, z = 13.48, p < .001); however, the effect was attenuated: participants with TBI were 1.49 times more likely to fixate the target when the speaker produced a gesture compared to a grooming movement. These results provide evidence of reduced speech-gesture integration in participants with TBI relative to their non-injured peers. This study advances our understanding of the communicative abilities of people with TBI and could lead to a more mechanistic account of the communication difficulties people with TBI experience in rich real-world communication contexts that require the processing and integration of multiple co-occurring cues. This work has the potential to increase the ecological validity of language assessment and to provide insights into the cognitive and neural mechanisms that support multimodal language processing.
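
The abstract does not specify the software or exact formulation behind the “dynamic generalized linear mixed models.” As a minimal sketch of the model structure it describes, the snippet below fits a closely related model, a binomial GEE with an AR(1) working correlation, using Python’s statsmodels; the data file and all column names (fix_target, movement, group, time_bin, trial_id) are hypothetical, and this is an illustration of the design, not the authors’ analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format eye-tracking table, one row per time bin per trial:
#   fix_target - 1 if gaze fell on the target object in this bin, else 0
#   movement   - "gesture" vs. "grooming"
#   group      - "TBI" vs. "NC" (non-injured comparison)
#   time_bin   - ordinal time bin within the critical window
#   trial_id   - unique participant-trial identifier
df = pd.read_csv("fixations.csv")  # file name is an assumption

# Binomial GEE with an AR(1) working correlation: one standard way to model
# binary fixations while respecting within-trial serial dependence, analogous
# to the abstract's AR(1) covariate in a dynamic GLMM.
model = smf.gee(
    "fix_target ~ movement * group + time_bin",
    groups="trial_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Autoregressive(grid=True),
)
result = model.fit()
print(result.summary())

# Coefficients are on the log-odds scale, so exp(beta) gives odds ratios.
print(np.exp(result.params))
```

Because the coefficients are log-odds, exp(β) recovers the effect sizes the abstract reports as likelihood ratios: exp(0.61) ≈ 1.84 for non-injured participants and exp(0.40) ≈ 1.49 for participants with TBI.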

Topic Areas: Phonology and Phonological Working Memory, Multilingualism
