You are viewing the SNL 2018 Archive Website.

Poster C49, Friday, August 17, 10:30 am – 12:15 pm, Room 2000AB

Development of Spoken Language Comprehension in Children with Cochlear Implants: Data from a Passive Listening Task

David Corina1, Sharon Coffey-Corina1, Laurie Lawyer2, Kristina Backer1, Andrew Kessler3, Lee Miller1;1Center for Mind and Brain, University of California, Davis, 2University of Essex, United Kingdom, 3University of Washington, Seattle WA

Introduction. Deaf children who receive a cochlear implant (CI) early in life often show gains in spoken language development; however, great variability in language outcomes exists (Tobey et al., 2012). Determining the developmental progression of language processing in these children has important clinical implications. In this study we used EEG to examine neurophysiological correlates of lexical processing in young children with cochlear implants and in normally hearing controls.

Method. Using a novel EEG paradigm, we examined responses elicited during passive listening to auditory sentences while children attended to unrelated silent cartoons. Twenty-eight children with cochlear implants (ages 2-8) and thirty normally hearing control children (ages 2-8) participated. Subjects heard 12 minutes of continuous speech presented over a speaker at approximately 65 dB SPL (hearing controls) and 70 dB SPL (CI users). Sentences were modified items from the Harvard sentence corpus. All speech was pitch-flattened to 82 Hz and multiplexed with chirps in alternating frequency bands. This manipulation permits assessment of auditory brainstem responses (not discussed here) in addition to lexical properties of speech [1]. Data were collected at 19 electrode sites and 2 mastoids. ICA was used to remove eye and CI artifacts (EEGLAB v.14). Separate analyses examined responses to open- versus closed-class words and to nouns versus verbs. Mean amplitude, peak amplitude, and peak latency were measured at 75-150 ms (P1-N1 time window) and 330-530 ms (N4 time window). ANOVA was used to evaluate component amplitude and latency.

Results. For hearing children, a significant effect of word class was observed between 330-530 ms, with open-class words eliciting an expected N4-like response (p < .001). Grammatical class also modulated responses: for hearing children, nouns exhibited a greater negativity than verbs from 330-530 ms (p < .003).
In contrast, for children with cochlear implants, we observed a prominent N1 for closed-class words and a reduced N4 for open-class words. We also observed reduced noun-verb differentiation with a prolonged latency. Additional analyses examined these patterns as a function of chronological age and time-in-sound (i.e., duration of CI use).

Conclusion. The data indicate that linguistic properties of speech are detectable under passive (i.e., ambient) listening conditions. While hearing children showed expected waveform morphologies, deaf children with CIs showed significantly reduced N4 amplitudes and delayed components. Time-in-sound measures suggest that ambient language processing is less effective in children with CIs and may contribute to language delays.
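The windowed ERP measurements described in the Method (mean amplitude, peak amplitude, and peak latency within the 75-150 ms and 330-530 ms windows) can be sketched as follows. This is an illustrative NumPy sketch with a synthetic waveform, not the authors' actual analysis pipeline; the function name, window bounds, and toy "N4" deflection are assumptions for demonstration.

```python
import numpy as np

def window_measures(erp, times, t_min, t_max, polarity=-1):
    """Mean amplitude, peak amplitude, and peak latency within a time window.

    erp   : 1-D ERP waveform (microvolts), same length as `times`.
    times : time vector in ms relative to word onset.
    polarity=-1 selects the most negative peak (e.g., an N4-like component);
    polarity=+1 would select the most positive peak (e.g., P1).
    """
    mask = (times >= t_min) & (times <= t_max)
    seg, seg_t = erp[mask], times[mask]
    mean_amp = seg.mean()
    idx = np.argmax(polarity * seg)   # extremum of the chosen polarity
    return mean_amp, seg[idx], seg_t[idx]

# Toy example: a negative-going deflection peaking near 400 ms
times = np.arange(-100, 700)                        # 1 ms sampling
erp = -3.0 * np.exp(-((times - 400) / 60.0) ** 2)   # synthetic "N4"
mean_amp, peak_amp, peak_lat = window_measures(erp, times, 330, 530)
```

Here the 330-530 ms window recovers the synthetic peak at 400 ms with amplitude -3.0 µV; per-subject values like these would then feed into the ANOVA on component amplitude and latency.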

Topic Area: Language Development