Slide Slam J16 Sandbox Series

Cortical tracking of speech envelope and phonetic information in children with cochlear implants and hearing children.

Slide Slam Session J, Wednesday, October 6, 2021, 5:30 - 7:30 pm PDT

Tatiana Matyushkina1, Sharon Coffey-Corina1, Lee Miller1, David Corina1; 1UC Davis Center for Mind and Brain

Cochlear implantation for the treatment of congenital deafness has been highly successful; however, outcomes for speech perception remain highly variable (Tobey et al., 2012). A long-standing issue in the field has been how best to assess the quality and content of the speech percepts that children with cochlear implants experience, and how the nature of the signal may change under conditions of multi-modal (auditory and visual) stimulation. We are using predictive modeling to quantify the relationship between features of the speech stream and the EEG signal. We make use of a novel electrophysiological paradigm (Backer et al., 2020) to obtain EEG data during children's perception of continuous speech with interposed visual stimulation. In the study, participants (CI: N = 7, mean age 6 y, mean age at implantation 17 m; hearing controls: N = 7, mean age 6 y 5 m) were instructed to watch a silent cartoon presented in the middle of the screen. Around the cartoon, two concentric checkered rings in the background flickered at different frequencies (7.5 and 12 Hz). Auditory stimuli consisted of 49 unique sentences (sampled at 22,050 Hz) from the Harvard/IEEE Corpus (IEEE, 1969) that were concatenated into a 2-minute-long WAV file of continuous speech. Periods of ambient speech occurred in the presence and absence of the visual flicker stimulation. The EEG data were then filtered between 1 and 15 Hz, and ICA was performed to remove ocular and CI-induced artifacts. We then used the mTRF Toolbox (Crosse et al., 2015) to create encoding models. The mTRF method involves fitting a temporal response function that describes a mapping between features of the sensory stimulus, such as the speech envelope, and the EEG signal (Di Liberto et al., 2015). Using this method, we hope to compare cortical tracking of a low-level speech feature (the envelope) as well as phonetic information (e.g., manner and place of articulation) in the presence of distracting visual stimuli in hearing and CI children. To the best of our knowledge, this method has not been applied to pediatric populations with cochlear implants. Preliminary data analysis has shown higher correlation scores between the reconstructed and original envelope in the CI group than in the hearing group (CI: r = 0.16, p < .0001; hearing: r = 0.09, p < .0001). This might reflect better envelope tracking in the CI group, as individuals with hearing impairments have been shown to exhibit enhanced envelope tracking compared to hearing counterparts (Decruy et al., 2020). Previous studies have shown lower phoneme discrimination accuracy in CI children (Bouton et al., 2012); thus, we predict worse performance of our phonetic-feature model in CI children than in hearing children. This may indicate that children with CIs rely more on broader context and less on phonetic features during speech recognition.
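
The comparison of reconstructed and original envelopes corresponds to the backward (stimulus-reconstruction) variant of the temporal response function approach. The sketch below illustrates that idea in plain NumPy rather than the MATLAB mTRF Toolbox the authors cite; the sampling rates, lag window, regularization strength, train/test split, and the random placeholder arrays standing in for preprocessed EEG and speech audio are all illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from scipy.signal import hilbert, resample
from scipy.stats import pearsonr

def lagged_design(x, lags):
    """Stack time-shifted copies of x (n_samples, n_channels); block i holds x[t + lags[i]]."""
    n, c = x.shape
    X = np.zeros((n, c * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(x, -lag, axis=0)
        if lag > 0:
            shifted[-lag:] = 0          # zero out samples that wrapped around
        elif lag < 0:
            shifted[:-lag] = 0
        X[:, i * c:(i + 1) * c] = shifted
    return X

def ridge_fit(X, y, lam):
    """Closed-form ridge regression weights: (X'X + lam*I)^-1 X'y."""
    XtX = X.T @ X
    return np.linalg.solve(XtX + lam * np.eye(XtX.shape[0]), X.T @ y)

# --- placeholder inputs (assumed already band-pass filtered and ICA-cleaned) ---
fs = 64                                   # analysis rate after downsampling (assumption)
eeg = np.random.randn(fs * 120, 32)       # 2 min of 32-channel EEG (stand-in data)
audio = np.random.randn(22050 * 120)      # 2 min of speech sampled at 22,050 Hz (stand-in data)

# Broadband envelope: magnitude of the analytic signal, downsampled to the EEG rate
env = np.abs(hilbert(audio))
env = resample(env, eeg.shape[0])[:, None]

# Backward (decoding) model: reconstruct the envelope from lagged EEG
lags = np.arange(0, int(0.25 * fs))       # EEG samples ~0-250 ms after each stimulus sample
half = eeg.shape[0] // 2                  # simple split; real analyses would cross-validate
X_train = lagged_design(eeg[:half], lags)
X_test = lagged_design(eeg[half:], lags)
w = ridge_fit(X_train, env[:half], lam=1e3)
env_hat = X_test @ w

r, p = pearsonr(env_hat.ravel(), env[half:].ravel())
print(f"envelope reconstruction accuracy: r = {r:.2f}")
```

The correlation between the reconstructed and original envelopes plays the role of the reported r values; a forward (encoding) model would instead regress the EEG on lagged stimulus features such as the envelope or phonetic-feature time series.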
