Poster B65, Tuesday, August 20, 2019, 3:15 – 5:00 pm, Restaurant Hall

Sensorimotor processing in language production: Evidence from audiovisual speech entrainment in people with aphasia

Marja-Liisa Mailend1, Gabriella Vigliocco2, Myrna Schwartz1, Laurel J. Buxbaum1; 1Moss Rehabilitation Research Institute, 2University College London

Research on speech entrainment demonstrates that some people with chronic non-fluent aphasia show dramatic improvement in fluency when shadowing the speech and mouth movements of another speaker in real time (Fridriksson et al., 2012; 2015). The theoretical and clinical importance of this effect calls for further research to replicate and extend the basic findings. We report early results from an ongoing study of audiovisual speech entrainment that uses improved methods and a case series design with the potential to elucidate which patients benefit from entrainment support, and why. Data are currently available from four participants with aphasia resulting from a left-hemisphere stroke. Two participants (NF1 and NF2) presented with a non-fluent profile consistent with Broca’s aphasia (WAB AQ = 68.6 and 48.4, respectively); the other two participants had fluent aphasia of the conduction (F1; WAB AQ = 71.9) and anomic (F2; WAB AQ = 89.7) types. Research-quality structural MRI scans are currently available for three participants. The lesion of NF1 localized primarily to the premotor cortex and the insula, NF2 had a large perisylvian lesion, and F2 had lesions in the primary motor and somatosensory cortices. All lesions extended deep into the white matter, with significant lesion overlap in the premotor cortex. The within-subjects design required participants to produce speech in three conditions. In all conditions, participants watched a Tweety and Sylvester cartoon and then listened to a narrative that captured the events of the cartoon. Next, in the two entrainment conditions, participants imitated the speech of a recorded model in real time: in the Audiovisual condition, participants heard the model and saw her mouth movements, whereas in the Auditory-Only condition only auditory information was available. Finally, in the Spontaneous Speech condition, participants described the events of the cartoon in their own words. The participants’ speech was recorded, transcribed, and coded for three outcome measures: different words per minute, % intelligible script words, and the time lag between the model’s and the participants’ speech. The results replicated previous findings with respect to fluency. The greatest number of different words per minute was produced in the Audiovisual condition (mean = 30, SD = 2) and the fewest in the Spontaneous Speech condition (mean = 22, SD = 11), with the Auditory-Only condition in between (mean = 28, SD = 5); this effect was driven by the non-fluent group. A more consistent pattern across participants was observed for % intelligible script words, with an average of 82% in the Audiovisual condition and 65% in the Auditory-Only condition. Furthermore, the average lag between the model’s and the participants’ speech was 158 ms in the Audiovisual condition and 151 ms in the Auditory-Only condition, indicating that speakers were truly entraining rather than merely repeating the words after the model. In summary, this study replicates previous findings of improved fluency under audiovisual speech entrainment for people with non-fluent aphasia, and also shows that people with fluent aphasia may benefit from audiovisual information in producing specifically targeted words. Results from this preliminary sample indicate that people with very different aphasia profiles and overall impairment levels are able to successfully entrain to another speaker in real time.

Themes: Multisensory or Sensorimotor Integration, Disorders: Acquired
Method: Behavioral
