Poster C79, Thursday, November 9, 10:00 – 11:15 am, Harborview and Loch Raven Ballrooms

Effects of Signal Quality on Audiovisual Integration in Cochlear Implant Users

Hannah Shatzer1, Mark Pitt1, Aaron Moberly1, Jess Kerlin2, Antoine Shahin2; 1Ohio State University, 2University of California, Davis

Humans routinely use auditory and visual signals together to understand speech. While individuals with normal hearing typically rely on auditory speech as the dominant mode of communication, visual speech cues (e.g., mouth shape, tongue and jaw movement) also provide linguistic information and become more valuable for accurate perception when the auditory signal is noisy or less reliable, such as in a crowded restaurant. Visual speech cues are even more important for cochlear implant users, whose electroacoustic speech signal is severely impoverished. The Dynamic Reweighting Model (Bhat et al., 2015) posits that when the auditory signal is noisy, a clear visual speech signal suppresses and overwrites information in early auditory cortex, reweighting neural processing in favor of the more reliable visual linguistic information. Cochlear implant users should therefore show stronger weighting of visual speech information and stronger suppression of early auditory cortex. The current study tested this prediction using postlingually deafened cochlear implant (CI) users and age-matched normal-hearing (NH) controls in an audiovisual (AV) identification task. While electroencephalography (EEG) was recorded, auditory tokens (/aba/, /aga/, /awa/) were presented at a signal-to-noise ratio (SNR) above or below a predetermined threshold and paired with congruent or incongruent videos of the mouth movements for the tokens. Videos were either clear or highly blurred. Participants identified the consonant they heard, with a fourth response button for percepts other than /b/, /g/, or /w/ (e.g., McGurk fusion percepts; McGurk & MacDonald, 1976). Behavioral results indicated that NH controls reliably reported the consonant presented in the auditory stimulus regardless of SNR or visual clarity, whereas CI users were more prone to reporting the visually presented consonant or a McGurk/fusion percept, particularly when the visual signal was clear and the auditory signal was presented below their SNR threshold. Event-related potential (ERP) results time-locked to the acoustic onset showed P1 suppression for clear compared to blurred visual conditions in both CI and NH participants; however, CI users also showed N1 amplitude suppression for blurred conditions that was not present in NH controls. Together, these results suggest that CI users weight the visual speech signal more heavily in AV integration than NH controls when the visual signal is clear, implying a stronger influence of visual information on early auditory cortex when the acoustic signal is less reliable. This finding is consistent with the predictions of the Dynamic Reweighting Model: linguistically salient visual information, as in clear rather than blurred visual speech, appears to suppress and overwrite early auditory activity, indicating heavier weighting of visual speech. The effects of visual speech on early auditory cortex are even more pronounced for CI users, who are constantly exposed to an impoverished acoustic speech signal.
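For readers less familiar with the two procedures central to the paradigm, the Python sketch below illustrates how an auditory token might be mixed with noise at a target SNR and how EEG segments could be averaged time-locked to the acoustic onset to yield an ERP. It is a minimal illustration under assumed array shapes and a 1000 Hz sampling rate, not the stimulus-generation or analysis code used in the study; the function names, parameters, and synthetic data are hypothetical.

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals `snr_db`
    (in dB), then return the speech + noise mixture. Illustrative only."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    target_noise_power = speech_power / (10 ** (snr_db / 10.0))
    return speech + noise * np.sqrt(target_noise_power / noise_power)

def erp_average(eeg, onset_samples, pre, post):
    """Average EEG segments (channels x samples) time-locked to each acoustic
    onset, spanning `pre` samples before to `post` samples after the onset."""
    epochs = np.stack([eeg[:, s - pre:s + post] for s in onset_samples])
    return epochs.mean(axis=0)  # channels x (pre + post) ERP waveform

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 1000                                              # assumed sampling rate (Hz)
    speech = np.sin(2 * np.pi * 200 * np.arange(fs) / fs)  # 1 s dummy "token"
    noise = rng.standard_normal(fs)
    mixed = mix_at_snr(speech, noise, snr_db=-5.0)         # e.g., a below-threshold SNR

    eeg = rng.standard_normal((32, 10 * fs))               # 32 channels, 10 s dummy EEG
    onsets = np.arange(1000, 9000, 1000)                   # dummy acoustic-onset samples
    erp = erp_average(eeg, onsets, pre=int(0.1 * fs), post=int(0.4 * fs))
    print(erp.shape)                                       # (32, 500): channels x time

In this sketch, component peaks such as P1 and N1 would correspond to deflections in the averaged ERP waveform within their typical post-onset latency windows; the study's actual preprocessing, referencing, and statistics are not shown.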

Topic Area: Perception: Speech Perception and Audiovisual Integration
