Slide Slam O5
Neural responses while listening to rhythmically varied stories
Katsuaki Kojima1,2, Yulia Oganian3, Srishti Nayak4, Kylie Korsnack5, Miriam Lense4, Reyna Gordon4, Cyrille Magne6; 1University of Cincinnati, 2Cincinnati Children's Hospital, 3University of California San Francisco, 4Vanderbilt University Medical Center, 5University of Richmond, 6Middle Tennessee State University
Despite its many functions in service of communicative efficacy, prosody is one of the most overlooked features of language. Prosodic cues such as speech rhythms (i.e., patterns of stressed and unstressed syllables) are also known to facilitate speech perception. Speech rhythms are represented in the speech envelope. The phase of cortical activity follows that of the speech envelope, a phenomenon known as envelope tracking. Two complementary measures of neural phase locking characterize envelope tracking: (1) cerebro-acoustic coherence (CAC) in the theta-delta bands (1-10 Hz) provides a measure of neural responses across the time course of continuous speech; (2) inter-event phase coherence (IEPC) is a measure of neural phase locking time-locked to specific acoustic events in the speech signal. We have previously shown that IEPC over speech-related cortical areas increases following times of rapid increase in the speech envelope (peakRate events), which mark stressed vowel onsets. Here we examined whether neural engagement while listening to continuous speech differs based on its rhythmic structure. To this end, we examined three electrophysiological measures: CAC, broadband event-related potentials (ERPs), and IEPC time-locked to peakRate events. We compared these measures in response to metrically regular and non-regular stories. We also examined whether the magnitude of neural responses to acoustic edges can be predicted from individual differences in participants' musical rhythm perception skills. Twenty-six neurologically healthy native speakers of English (21 F), aged 18-22 years (M = 18.8), participated. High-density EEG was recorded while participants listened to two 6-minute-long audio recordings of children's stories, one with a notably regular metrical structure and the other with a non-regular metrical structure.
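The three measures can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the study's actual analysis pipeline: the function names, filter settings, and the 150 ms post-event latency are assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, find_peaks, coherence

def speech_envelope(audio, fs, cutoff=10.0):
    """Amplitude envelope of the speech signal, low-passed to the theta-delta range."""
    env = np.abs(hilbert(audio))
    b, a = butter(2, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, env)

def peak_rate_events(env, fs, min_interval=0.1):
    """Times (s) of local maxima in the envelope's rate of rise (peakRate events)."""
    rate = np.clip(np.diff(env) * fs, 0.0, None)        # keep rising portions only
    peaks, _ = find_peaks(rate, distance=int(min_interval * fs))
    return peaks / fs

def cac(eeg, env, fs, band=(1.0, 10.0)):
    """Cerebro-acoustic coherence: magnitude-squared coherence between an EEG
    channel and the speech envelope, averaged over the theta-delta band."""
    f, cxy = coherence(eeg, env, fs=fs, nperseg=int(4 * fs))
    mask = (f >= band[0]) & (f <= band[1])
    return float(cxy[mask].mean())

def iepc(eeg, fs, event_times, band=(3.0, 5.0), latency=0.15):
    """Inter-event phase coherence: length of the mean resultant phase vector,
    sampled at a fixed latency after each peakRate event."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, eeg)))
    idx = ((np.asarray(event_times) + latency) * fs).astype(int)
    idx = idx[idx < len(eeg)]
    return float(np.abs(np.mean(np.exp(1j * phase[idx]))))
```

An EEG channel that tracks the envelope perfectly yields CAC and IEPC near 1; signals unrelated to the stimulus yield values near 0 (with an upward bias for short recordings or few events).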
Participants also performed a musical rhythm discrimination test, which comprised simple (strongly beat-based) and complex (syncopated) rhythm conditions. As predicted, we found evoked responses and increased IEPC in the theta-delta bands to acoustic edges in both conditions, consistent with previous results. Crucially, IEPC was higher for metrically regular sentences in the low theta range (~3 Hz) and for metrically irregular sentences in the delta range (~1 Hz). As with IEPC, CAC was higher for metrically regular vs. irregular sentences in the theta range (~4-5 Hz), whereas at ~1 Hz, CAC was higher for metrically irregular vs. regular sentences. In addition, ERPs showed a frontocentral negativity at 300-400 ms in response to metrically irregular speech, consistent with previous findings. Individual differences in complex musical rhythm sensitivity (measured behaviorally) were positively correlated with theta IEPC time-locked to peakRate events, which are thought to mark syllabic stress, in metrically regular sentences. In summary, our results suggest enhanced neural engagement with metrically predictable syllable patterns, as in rhythmically regular speech, whereas neural engagement with temporal patterns in non-regular speech occurs at slower time scales. Further, the results suggest that individual differences in sensitivity to musical rhythm may modulate neural engagement with metrical patterns in speech. These findings extend previous work on the role of peakRate cues in the processing of acoustic edges by showing their importance at the level of metrical patterns in sentences, and they align with literature linking individual differences in musical rhythm ability to speech rhythm processing.