Poster C71, Wednesday, August 21, 2019, 10:45 am – 12:30 pm, Restaurant Hall

Listening experience and syntactic complexity modulate neural networks for speech and language: A functional near-infrared spectroscopy study

Bradley White¹, Clifton Langdon¹; ¹Gallaudet University

INTRODUCTION: A large body of work suggests that experience listening to acoustically degraded speech modulates how the brain perceives and processes speech and language information, ultimately shaping the brain's structure and function (White, Berger, & Langdon, 2019; White & Langdon, 2018a-c; White, Kushalnagar, & Langdon, 2018; Alain et al., 2018; Peelle, 2018; Mattys et al., 2012; McKay et al., 2016; Werker & Hensch, 2015). Investigations of long-term, experienced hearing aid (HA) and cochlear implant (CI) users further suggest that the type of developmental listening experience may also predict how the brain functions at rest (White, Berger, & Langdon, 2019; McKay et al., 2016). However, the effects of early, life-long exposure to acoustically degraded speech on active speech and language networks remain elusive, especially in the presence of increased cognitive demand. We assess this relationship and hypothesize that the duration (naïve v. experienced) and type (narrow- v. wide-band) of auditory experience modulate speech and language networks differently for simple compared to complex conditions, possibly indicative of developmental neurocognitive changes in early, life-long HA and CI users.

METHODOLOGY: Participants. Right-handed, healthy, monolingual, young adult typically-hearing listeners (TH, N=15), HA users (N=6), and CI users (N=4). HA and CI participants received and began using their devices before age 5 and have good speech perception. Procedures. Participants completed a battery of language and cognitive assessments. The fNIRS task presented participants with 192 sentences for a plausibility judgment task, during which we recorded cortical hemodynamic activity from 50 channels spanning the frontal and bilateral temporal-parietal cortices. The sentences varied linguistically (i.e., simple subject-relative and complex object-relative clause structures) for all participants and, for TH participants only, acoustically (i.e., clear, HA-simulated, and CI-simulated speech). We performed task-based functional connectivity (tbFC) analysis to assess the role of auditory experience and cognitive demand on active language processing networks (a minimal analysis sketch follows the abstract). All analyses were thresholded at pFDR < 0.05 or more stringent.

RESULTS: (A) Narrow-band speech: Naïve and experienced listeners of narrow-band speech exhibit little difference in neural network patterns for simple and complex grammar. (B) Wide-band speech: Naïve and experienced listeners of wide-band speech exhibit significantly dissimilar neural network patterns for simple grammar. For complex grammar, preliminary behavioral and tbFC results suggest that HA users disengage from the task and engage in thinking unrelated to it. (C) Clear speech: Experienced TH listeners of clear speech exhibit neural network patterns for the simple and complex grammar conditions that differ from those of HA and CI users and from all simulated listening conditions.

CONCLUSION: The observed tbFC results for naïve and experienced listeners of narrow-band speech are corroborated by previous findings of functional hemodynamic activation in TH listeners and CI users (White, Kushalnagar, & Langdon, 2018), suggesting that naïve and experienced listeners of narrow-band speech use similar neurocognitive mechanisms. Comparisons to the HA users, however, suggest that their listening experiences can result in distinct modulations of the speech and language processing networks despite similar performance on standardized speech perception assessments. Taken together, these findings advance our understanding of neuroplasticity and resilience for speech and language perception.
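The abstract does not specify the exact tbFC pipeline, so the following is only a minimal sketch of one common approach: Fisher z-transformed channel-pair correlations per condition, contrasted between groups edge-wise and corrected with Benjamini-Hochberg FDR (matching the stated pFDR < 0.05 threshold). The data shapes, group sizes, and the two-sample t-test contrast are illustrative assumptions, not the authors' method; only numpy, scipy, and statsmodels calls that exist with these signatures are used.

```python
# Hypothetical tbFC sketch: channel-pair correlations -> Fisher z ->
# edge-wise group contrast -> FDR correction. Random data stands in for
# preprocessed fNIRS HbO time series (an assumption, not the real pipeline).
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

N_CHANNELS = 50  # fNIRS channels over frontal and temporal-parietal cortices


def connectivity_matrix(hbo):
    """Fisher z-transformed Pearson correlations between all channel pairs.

    hbo: array (N_CHANNELS, n_timepoints) -- concatenated task blocks for
    one condition (e.g., complex object-relative sentences).
    """
    r = np.corrcoef(hbo)
    np.fill_diagonal(r, 0.0)  # ignore self-connections before the transform
    return np.arctanh(r)      # Fisher z makes correlations ~normal


def group_contrast(z_a, z_b, alpha=0.05):
    """Edge-wise two-sample t-test between groups, FDR-corrected.

    z_a, z_b: arrays (n_subjects, N_CHANNELS, N_CHANNELS) of Fisher-z
    connectivity matrices, e.g., experienced HA users vs. naive TH
    listeners of HA-simulated speech.
    """
    iu = np.triu_indices(N_CHANNELS, k=1)   # unique channel pairs (edges)
    a = z_a[:, iu[0], iu[1]]                # (n_subjects_a, n_edges)
    b = z_b[:, iu[0], iu[1]]
    t, p = stats.ttest_ind(a, b, axis=0)
    # Benjamini-Hochberg FDR across all edges (pFDR < alpha).
    reject, p_fdr, _, _ = multipletests(p, alpha=alpha, method="fdr_bh")
    return t, p_fdr, reject, iu


# Illustrative run with random data in place of real recordings:
rng = np.random.default_rng(0)
z_th = np.stack([connectivity_matrix(rng.standard_normal((N_CHANNELS, 600)))
                 for _ in range(15)])  # e.g., 15 TH listeners
z_ha = np.stack([connectivity_matrix(rng.standard_normal((N_CHANNELS, 600)))
                 for _ in range(6)])   # e.g., 6 HA users
t, p_fdr, reject, iu = group_contrast(z_th, z_ha)
print(f"{reject.sum()} of {len(p_fdr)} channel pairs survive pFDR < 0.05")
```

Surviving edges (channel pairs where `reject` is True) would then be summarized as condition- and group-specific network patterns of the kind compared in the RESULTS section.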

Themes: Speech Perception, Perception: Auditory
Method: Functional Imaging