Presentation

Early neural encoding of acoustic-phonetic information is consistent across language ability

Poster B72 in Poster Session B, Tuesday, October 24, 3:30 - 5:15 pm CEST, Espace Vieux-Port

Nikole Giovannone1, Shawn Cummings1, Adrián García-Sierra1, James Magnuson1,2, Rachel Theodore1; 1University of Connecticut, 2Basque Center on Cognition, Brain, and Language

Speech is a complex signal. Even at the level of individual speech sounds (like the /g/ in “goal”), there is wide acoustic-phonetic variability across productions. In general, listeners are remarkably sensitive to this variability, and the N100 ERP component reflects this sensitivity. Specifically, the amplitude of the N100 is graded with respect to voice onset time (VOT), a temporal cue for identifying stop consonants, such that N100 amplitude increases linearly as VOT decreases. Individual differences in sensitivity to acoustic-phonetic information have been observed in behavior. For example, some listeners with language impairment (e.g., developmental language disorder, specific language impairment) show weaknesses in speech sound identification and discrimination, and some theories posit that these perceptual deficits may underlie the linguistic deficits associated with language impairment. However, an open question is whether individuals with weaker language ability (characteristic of language impairment) also show differences in acoustic-phonetic cue encoding at the neural level. The goal of this experiment was therefore to examine the relationship between language ability and early neural encoding of acoustic-phonetic information. We used the N100 ERP component to investigate whether listeners with weaker language ability demonstrate diminished sensitivity in their encoding of VOT relative to listeners with stronger language ability. Listeners (n = 77) completed a battery of standardized assessments to measure language ability, as well as a phonetic identification task while EEG was recorded. In the phonetic identification task, listeners categorized items from 9-step minimal pair continua (e.g., “goal”-“coal”, “gain”-“cane”). Our findings revealed a strong effect of VOT on the N100 component: as VOT increased, N100 amplitude decreased, consistent with prior research. However, there was no evidence that the encoding of VOT in the N100 varies with language ability. Moreover, Bayes Factor analyses provided moderate evidence for the null hypothesis (i.e., that there is no difference in the early neural encoding of VOT as a function of language ability), suggesting that listeners with weaker language ability may not show reduced sensitivity to acoustic-phonetic information during early neural encoding. In summary, while listeners overall demonstrated the expected sensitivity to VOT in the N100 component, no differences in the neural encoding of VOT were observed as a function of language ability. These findings contribute to a better understanding of the relationship between language ability and acoustic-phonetic processing, indicating that reduced sensitivity at the neural level may not underlie the linguistic deficits observed in populations with language impairment.
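The two analytic ideas in the abstract, a linear effect of VOT on N100 amplitude and Bayes Factor evidence for the null interaction with language ability, can be illustrated with a small sketch. The code below is a hypothetical illustration only: it uses simulated data, ordinary least squares rather than the authors' actual EEG pipeline, and a BIC-based Bayes factor approximation; the variable names, continuum values, and model structure are assumptions for demonstration.

```python
# Hypothetical sketch (not the authors' pipeline): regress simulated single-trial
# N100 amplitude on VOT and language ability, then approximate a Bayes factor for
# the null (no VOT x language interaction) from BIC values,
# i.e., BF01 ~= exp((BIC_alternative - BIC_null) / 2).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated stand-in for 77 listeners x 9 VOT steps; all values are illustrative.
n_subj, n_steps = 77, 9
vot = np.tile(np.linspace(0, 60, n_steps), n_subj)        # VOT in ms (hypothetical continuum)
lang = np.repeat(rng.normal(100, 15, n_subj), n_steps)    # standardized language score
# N100 becomes less negative as VOT increases; no dependence on language ability.
n100 = -6.0 + 0.05 * vot + rng.normal(0, 1.0, n_subj * n_steps)
df = pd.DataFrame({"vot": vot, "lang": lang, "n100": n100})

# Null model: main effects only; alternative adds the VOT x language interaction.
null_fit = smf.ols("n100 ~ vot + lang", data=df).fit()
alt_fit = smf.ols("n100 ~ vot * lang", data=df).fit()

# BIC approximation to the Bayes factor in favor of the null hypothesis.
bf01 = np.exp((alt_fit.bic - null_fit.bic) / 2)
print(f"VOT slope: {null_fit.params['vot']:.3f} uV per ms")
print(f"BF01 (evidence for no VOT x language interaction): {bf01:.2f}")
```

In a sketch like this, a BF01 above roughly 3 would correspond to the kind of "moderate evidence for the null" described in the abstract.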

Topic Areas: Speech Perception, Disorders: Developmental
