
Poster A15, Thursday, August 16, 10:15 am – 12:00 pm, Room 2000AB

Audiovisual speech integration in cochlear implant users: A behavioral and optical neuroimaging study

Iliza Butera1, Rene Gifford1, Mark Wallace1; 1Vanderbilt University

Cochlear implants (CIs)—widely considered the most successful neuroprosthetic devices—afford over half a million users worldwide access to sound following severe-to-profound hearing loss. However, visual cues remain vitally important for many CI users to interpret the impoverished auditory information that an implant conveys. While auditory-only speech understanding is well characterized in clinical outcome measures, relatively little is known about audiovisual (AV) speech comprehension in this cohort. The aim of this study is to characterize AV integration of speech in CI users compared to normal-hearing controls using both behavioral and neuroimaging approaches. We reasoned that CI users’ high proficiency with visual-only oral communication (i.e., lip reading) may contribute to enhanced audiovisual processing following implantation. To date, we have recruited 18 adults with CIs who have completed monosyllabic word recognition testing using 224 words arranged into 9 lists of 40 words each. Using these stimuli, we tested word recognition in three modalities (A, V, AV) and at three auditory signal-to-noise ratios (SNRs). Because the components of a multisensory stimulus are more effectively integrated when the salience of those components is relatively weak (i.e., greater gain is seen), we conducted this behavioral testing in quiet and in two levels of multi-talker babble that partially masked the target speaker’s voice, presented at 60 dB SPL. We sought to measure AV gain at individualized noise levels approaching 20% identification of words in the auditory-only condition, as well as at a moderate noise level targeting 50% identification. The resulting SNRs across all subjects ranged from +15 to -5 dB, and we quantified audiovisual integration using the formula: (AV - max(A,V)) / max(A,V) × 100%. Preliminary analyses indicate that the CI cohort experiences AV gain both with and without background noise.
However, multisensory-mediated gain measured in quiet (median = 29%) was not significantly different from gain at a moderate noise level (median = 63%; Mann-Whitney U = 80, n1 = 18, n2 = 13, p = 0.1, two-tailed). In comparison, AV gain measured at high-noise SNRs (median = 106%) was significantly greater than at moderate SNRs (Mann-Whitney U = 41, n1 = 17, n2 = 13, p = 0.003, two-tailed). Ongoing recruitment of an age-matched, normal-hearing control group will allow us to perform a between-groups comparison to test whether this AV gain in CI users is greater than that of controls, particularly at the highest noise level. Additionally, we have collected optical neuroimaging data using functional near-infrared spectroscopy (fNIRS) in CI users (n = 15) to further test whether greater recruitment of multisensory areas such as the superior temporal sulcus (STS) is also evident in a between-groups comparison. The overall goal of this work is to better understand audiovisual integration and how it relates to speech comprehension in CI users. This knowledge is essential for our understanding of proficiency with a CI and, most importantly, for how users can best utilize all sensory information to enhance speech intelligibility and improve quality of life.
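The AV gain formula above can be sketched in a few lines of code. This is an illustrative implementation only; the function name and the example scores are hypothetical, not data from the study.

```python
def av_gain(a: float, v: float, av: float) -> float:
    """Multisensory gain relative to the best unisensory score.

    a, v, av: percent of words correctly identified in the
    auditory-only, visual-only, and audiovisual conditions.
    Implements (AV - max(A, V)) / max(A, V) * 100%.
    """
    best_unisensory = max(a, v)
    return (av - best_unisensory) / best_unisensory * 100.0

# Hypothetical scores: 40% correct auditory-only, 25% visual-only,
# 60% audiovisual -> a 50% relative gain over the best single modality.
print(av_gain(40, 25, 60))  # → 50.0
```

Note that the denominator is the better of the two unisensory scores, so a gain above 0% indicates performance exceeding what either modality alone supports.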

Topic Area: Perception: Speech Perception and Audiovisual Integration