Slide Slam O8
Audiovisual speech perception in typically hearing and cochlear implant-using children: an ERP study
Elizabeth Pierotti1,2, Sharon Coffey-Corina1, Lee Miller1,3, David P. Corina1,2,4; 1Center for Mind and Brain, University of California, Davis, 2Department of Psychology, University of California, Davis, 3Department of Neurobiology, Physiology, & Behavior, University of California, Davis, 4Department of Linguistics, University of California, Davis
Most spoken language occurs under conditions in which the listener has access to both auditory (i.e., acoustic) and visual (i.e., facial) information. A large body of behavioral research shows benefits of audiovisual (A/V) over audio-only (A) presentation for word recognition, lexical decision, and sentence processing, as well as under noisy conditions (Bernstein et al., 2004; Ma et al., 2009). Electrophysiological studies of A/V speech perception have shown effects of presentation modality on early ERP components (N1/P2) and on the later N400 component (Basirat et al., 2018; Brunellière et al., 2020; Pilling, 2009). These components respond consistently to speech, with amplitudes and latencies modulated by the presence of visual information; ERP components associated with speech processing are typically attenuated under A/V relative to A-only presentation. Developmental changes have also been reported, with younger children showing less influence of visual speech cues on auditory ERP components than older children (Knowland et al., 2014). We investigate A/V speech processing in congenitally deaf children who have received a cochlear implant (CI). Continuous EEG data were collected from deaf children with CIs (n = 30; mean age = 81 mos) and typically hearing children (n = 19; mean age = 75 mos) during a word-picture priming paradigm, in which audiovisually presented word primes preceded picture targets. Here we examine processing of the spoken A/V primes in these data. Results demonstrate more positive P1 and P2 responses in CI-using children than in typically hearing controls (p = .017 and p = .043, respectively), but no group difference in N400 responses (p = .142). We also explored age- and experience-related factors. Overall visual P1 amplitude was correlated with participant age (R = 0.286, p = .004).
P2 latency decreased with age in the control group (R = -0.199, p = .014) but not in the CI group (R = -0.07, p = .278). However, CI users’ time-in-sound was positively correlated with P2 amplitude (R = 0.18, p = .005). These P2 findings may reflect differences in attentional mechanisms related to A/V speech in experienced CI users. In sum, audiovisual speech may evoke greater visual reactivity and differentially engage attention in CI users relative to controls. Despite these differences in early sensory and attentional processing, comprehension and semantic processing were comparable across groups, as evidenced by similar N400 responses. These data help to inform our understanding of attentional and perceptual speech processing in children with cochlear implants.
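The R values reported above are Pearson product-moment correlations. As a minimal sketch of that computation (not the authors' analysis code; the example arrays below are entirely hypothetical, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical illustration: age in months vs. P1 amplitude (microvolts)
ages = [60, 66, 72, 78, 84, 90]
p1_amp = [2.1, 2.4, 2.3, 2.9, 3.1, 3.0]
r = pearson_r(ages, p1_amp)  # positive r: amplitude increases with age
```

In practice an analysis like this would use a statistics package (e.g., `scipy.stats.pearsonr`), which also returns the p-value for the correlation.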