
Progression of acoustic, phonemic, lexical, and sentential neural features emerges for different speech listening

Poster B33 in Poster Session B and Reception, Thursday, October 6, 6:30 - 8:30 pm EDT, Millennium Hall

I. M. Dushyanthi Karunathilake1, Christian Brodbeck2, Shohini Bhattasali1, Philip Resnik1, Jonathan Z. Simon1; 1University of Maryland College Park, 2University of Connecticut

Understanding speech requires analyzing the acoustic waveform via intermediate abstract representations, including phonemes, words, and ultimately meaning, along with other cognitive operations. Recent neurophysiological studies have reported that the brain tracks both acoustic and linguistically meaningful units. However, because these speech representation units are usually correlated with each other, and because typically only a small subset of features is modeled, it is unclear whether the reported neural tracking absorbs variance from features left out of the model, making the estimated feature responses less accurate. Additionally, how these feature responses are modulated by top-down mechanisms and speech comprehension is not well understood. To address these limitations, we recorded magnetoencephalography (MEG) data from 30 healthy younger adults while they listened to four types of continuous speech-like passages: speech-envelope-modulated noise, narrated English-like non-words, a word-scrambled narrative, and a true narrative. Using multivariate temporal response function (mTRF) analysis, we show that the cortical response time-locks to progressively emerging features, from acoustics to linguistic processes at the sentential level, as the incremental steps of processing the speech input occur. Our results show that when the stimulus is unintelligible, the cortical response time-locks only to acoustic features, whereas for intelligible speech it time-locks to both acoustic and linguistic features. For the narrated non-words, phoneme-based lexical uncertainty generates weaker activation than for true words, suggesting a lack of predictive coding error. Temporal analysis shows that non-word onsets generate smaller early responses but stronger late responses than word onsets, suggesting different neural mechanisms associated with accessing lexico-semantic memory traces. For the word-scrambled passage, we find additional responses driven by context-independent (unigram) word surprisal, whereas for the true narrative the responses are additionally driven by context-based word surprisal. The unigram word surprisal responses show strong late peaks for the word-scrambled passage, consistent with an N400-like response. The results also show that most language-dependent time-locked responses are left-lateralized, whereas lower-level acoustic feature responses are right-lateralized or strongly bilateral. Taken together, our results show that brain responses to specific linguistic units depend on the speech content and the level of processing, and that these speech-feature responses may serve to evaluate perception and comprehension.
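For readers unfamiliar with the mTRF approach, the sketch below illustrates the general idea: the recorded response is modeled as a sum of time-lagged contributions from several stimulus features, estimated here with simple ridge regression on a lagged design matrix. It is a minimal, self-contained illustration with hypothetical arrays, lag range, and regularization value; it is not the authors' analysis pipeline, which may use a different estimator.

```python
import numpy as np

fs = 100                                  # assumed sampling rate (Hz)
lags = np.arange(0, int(0.5 * fs))        # lags covering 0-500 ms

def lag_matrix(stimulus, lags):
    """Stack time-lagged copies of each predictor column (causal lags)."""
    n_times, n_feat = stimulus.shape
    X = np.zeros((n_times, n_feat * len(lags)))
    for i, lag in enumerate(lags):
        X[lag:, i * n_feat:(i + 1) * n_feat] = stimulus[:n_times - lag]
    return X

def fit_mtrf(stimulus, response, lags, alpha=1.0):
    """Estimate a multivariate TRF by ridge regression (normal equations)."""
    X = lag_matrix(stimulus, lags)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ response)
    return w.reshape(len(lags), stimulus.shape[1])   # (n_lags, n_features)

# Toy usage: three hypothetical predictors standing in for an acoustic
# envelope, a phoneme-onset series, and a word-surprisal series.
rng = np.random.default_rng(0)
stim = rng.standard_normal((60 * fs, 3))   # 60 s of predictor time series
meg = rng.standard_normal(60 * fs)         # one MEG channel / source signal
trf = fit_mtrf(stim, meg, lags)
print(trf.shape)                           # (50, 3): one response function per feature
```

In this formulation, each column of the returned array is the estimated temporal response function for one predictor, so correlated predictors compete for variance within a single joint model rather than being fit separately.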

Topic Areas: Perception: Auditory, Speech Perception