Poster E9, Saturday, August 18, 3:00 – 4:45 pm, Room 2000AB
Using machine learning to model effects of attention and language experience on neural phonemic constancy
Fernando Llanos1, Rachel Reetzke1, Zilong Xie1, Liberty Hamilton1, Bharath Chandrasekaran1,2,3,4,5; 1Department of Communication Sciences and Disorders, The University of Texas at Austin; 2Institute for Mental Health Research, The University of Texas at Austin; 3Department of Psychology, The University of Texas at Austin; 4Department of Linguistics, The University of Texas at Austin; 5Institute for Neuroscience, The University of Texas at Austin
The acoustic realization of phonemes varies with phonetic context and language experience. Through native-language experience, listeners develop phonemic representations that are perceptually constant across phonetic contexts (phonemic constancy; Nusbaum & Magnuson, 1997). However, phonemic processing is affected by attention and language experience (Hugdahl et al., 2003; Best & Tyler, 2007). Phonemic constancy in a non-native speech context is challenged by the presence of phonemes that are not native or that are phonetically realized in a non-native way. Typically, reduced attention and limited language experience manifest as slower and less accurate behavioral responses (Munro & Derwing, 1995). Here, we focus on the impact of attention and language experience on neural phonemic processing. Inspired by research using electroencephalography (EEG) to investigate neural processing of continuous native speech (Khalighinejad et al., 2017), we introduce a machine-learning metric (neural phonemic constancy; NPC) that measures EEG variability across different phonetic realizations of the same phoneme. We examined the effects of attention and language experience on NPC. We recorded EEG responses from fifteen native speakers of English and late Chinese-English bilinguals while they listened to 60 audio tracks of a story recorded in English. Each speech track was mixed with a tone sequence containing deviants that differed either in frequency or duration. Listeners were instructed to focus on the speech (attended speech condition) or on the tone sequences (ignored speech condition). We measured attention to the story with comprehension questions at the end of each track. Preprocessed EEG responses were time-aligned to the onset of each phoneme over a 300-ms time window. We trained hidden Markov models to learn stochastic prototypes of EEG responses to the same phoneme that generalize across multiple phonetic contexts.
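The prototype-learning step could be sketched as follows. The study fit hidden Markov models; as a dependency-free simplification, the sketch below fits one diagonal-Gaussian prototype per phoneme by pooling phoneme-aligned epochs across phonetic contexts. All array shapes, phoneme labels, and the random data are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: for each phoneme, EEG epochs time-aligned to phoneme
# onset over a 300-ms window, flattened to one feature vector per epoch
# (e.g. time samples x channels).
n_features = 64
epochs_by_phoneme = {
    "T":  rng.standard_normal((40, n_features)),        # 40 epochs of /t/
    "AA": rng.standard_normal((40, n_features)) + 0.3,  # 40 epochs of /a/
}

def fit_prototype(epochs):
    """Pool epochs across contexts into a diagonal-Gaussian prototype
    (a simplified stand-in for the HMM prototypes used in the study)."""
    mean = epochs.mean(axis=0)
    var = epochs.var(axis=0) + 1e-6  # small floor for numerical stability
    return mean, var

prototypes = {ph: fit_prototype(eps) for ph, eps in epochs_by_phoneme.items()}
```

The key design point carried over from the abstract is that each prototype is estimated from responses to the same phoneme in many different phonetic contexts, so it encodes what is common to the phonemic class rather than to any one context.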
Then, we computed the distance between new EEG responses and their corresponding model prototypes using the posterior log-probability metric (Durbin et al., 1998). Higher log-probabilities indicate that the EEG responses are more prototypical of their phonemic class. To assess neural phonemic constancy across different EEG responses to the same phoneme, we averaged log-probabilities within phonemes in individual participants and conditions (NPC scores). A linear mixed-effects model of NPC scores with language (English/Chinese) and condition (attended/ignored) as fixed effects, and subject and phoneme as random effects, revealed effects of language (English>Chinese; t=4.42, p<0.001), condition (attended>ignored; t=6.64, p<0.001), and language-by-condition (t=5.16, p<0.001). These results show that NPC can capture subtle differences in language- and attention-driven plasticity. Post-hoc Tukey tests showed that English NPC scores were higher than Chinese scores in the attended speech condition (p=0.0012). This result is consistent with cross-language patterns of correct responses to speech comprehension questions in the attended speech condition. NPC scores were also higher when speech was attended, but only for English listeners (p<0.001). This result could reflect a reduction of available attentional resources when processing non-native speech. Finally, English NPC scores were higher than Chinese scores across the English phonemes that are not contrastive in Chinese (t, p=0.001), but not across the English phonemes that are contrastive in Chinese. This result indicates that the effects of language experience on neural processing are phoneme-specific.
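The scoring and averaging steps can be sketched in the same simplified form (a diagonal-Gaussian log-density stands in for the HMM posterior log-probability of the study): each held-out epoch is scored under its phoneme's prototype, and NPC is the within-phoneme average of those log-probabilities. The data below are synthetic, for illustration only.

```python
import numpy as np

def log_probability(epoch, prototype):
    """Log-density of one epoch under a diagonal-Gaussian prototype
    (simplified stand-in for the posterior log-probability metric)."""
    mean, var = prototype
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (epoch - mean) ** 2 / var)

def npc_score(epochs, prototype):
    """NPC: average log-probability across epochs of the same phoneme."""
    return np.mean([log_probability(e, prototype) for e in epochs])

# Illustrative check: epochs drawn from the prototype's own distribution
# should score higher (be more prototypical) than shifted epochs.
rng = np.random.default_rng(1)
mean, var = np.zeros(8), np.ones(8)
matched = rng.standard_normal((50, 8))
shifted = rng.standard_normal((50, 8)) + 2.0
assert npc_score(matched, (mean, var)) > npc_score(shifted, (mean, var))
```

Averaging within phoneme, participant, and condition, as the abstract describes, turns the per-epoch scores into one NPC value per cell, which is the unit entered into the linear mixed-effects model.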
Topic Area: Perception: Speech Perception and Audiovisual Integration