
Neural entrainment reveals simultaneous, implicit learning of adjacent word and non-adjacent phrasal structure

Poster D70 in Poster Session D with Social Hour, Friday, October 7, 5:30 - 7:15 pm EDT, Millennium Hall

Ivonne Weyers1,2, Britta Walkenhorst2, Katja Nettermann2, Öykü Bulca2, Thomas Gruber2, Jutta L. Mueller1,2; 1University of Vienna, Austria, 2University of Osnabrueck, Germany

A number of recent EEG studies have shown that cortical activity recorded during continuous speech perception tracks linguistic structure at different hierarchical levels, namely syllables (e.g., Batterink, 2020; Batterink & Paller, 2017; Buiatti, Peña, & Dehaene-Lambertz, 2009), words (e.g., Batterink, 2020; Batterink & Paller, 2017), phrases (Getz, Ding, Newport, & Poeppel, 2018) and even sentences (Ding, Melloni, Zhang, Tian, & Poeppel, 2016). Many of these studies have shown that mere distributional information, specifically transitional probabilities between adjacent syllables (Batterink & Paller, 2017) or categories of non-words (Getz et al., 2018), often suffices as a cue to evoke such rapid neural entrainment to the frequency of the initiating stimulus. Although speech is a serial acoustic signal, however, the hierarchical structure of human language requires the listener to track syntactic constituency beyond such an immediately adjacent local context. Buiatti et al. (2009) investigated neural tracking of non-adjacent dependencies at the word level (e.g., puXki) and found that power peaks at the word frequency were visible only when pauses additionally highlighted word boundaries. In the present study, we investigated whether neural entrainment to linguistic structure reflects simultaneous tracking of both local and non-local structure in a hierarchical manner. We created structured streams of syllables in which adjacent syllables formed bisyllabic non-words (e.g., pelo), which in turn formed non-adjacent phrase-like units (e.g., pelonefutoba). There were no pauses or other prosodic cues highlighting word or phrase boundaries, so that the only cue to constituency was the available distributional information. Adults (N = 23) actively listened to four blocks of structured sequences and four blocks of random syllable sequences while their EEG was recorded.
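The stimulus arithmetic implied here can be sketched briefly. The abstract reports a 1.6 Hz word rate and a 0.53 Hz phrase rate; assuming bisyllabic words and the six-syllable phrase example, this implies three words per phrase and a syllable rate of about 3.2 Hz. Note that only "pelo" and "pelonefutoba" appear in the abstract; the segmentation into "nefu" and "toba", the word list, and the helper function below are hypothetical illustrations, not the authors' actual stimulus set.

```python
# Reported presentation frequencies (from the abstract).
WORD_HZ = 1.6        # word-level frequency
PHRASE_HZ = 0.53     # phrase-level frequency
SYLLABLES_PER_WORD = 2

# Derived structure: ~3 bisyllabic words per phrase, 3.2 Hz syllable rate.
words_per_phrase = round(WORD_HZ / PHRASE_HZ)
syllable_hz = WORD_HZ * SYLLABLES_PER_WORD

# Hypothetical segmentation of the example phrase into bisyllabic non-words.
WORDS = ["pelo", "nefu", "toba"]

def build_stream(n_phrases):
    """Concatenate phrase-like units with no pauses, so the only cue to
    word and phrase boundaries is the distributional structure itself."""
    return "".join("".join(WORDS) for _ in range(n_phrases))
```

Under these assumptions, `build_stream(2)` yields a continuous stream of two phrase repetitions with no acoustic boundary markers.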
In a grammaticality judgment task alternating with the exposure blocks, participants evaluated individually presented phrase exemplars. Whereas behavioral performance largely remained at chance in the structured condition (M = .55, SD = .15), analysis of steady-state auditory evoked potentials (SSAEPs) revealed significant differences in the cortical response between the structured and random conditions, with significantly higher neural entrainment at the word-level (1.6 Hz) and phrase-level (0.53 Hz) frequencies in the structured condition. In addition, a significant increase in power at both of these frequencies between the first and second learning phase of the structured condition suggests a learning effect for both words and phrases. To our knowledge, the present study is the first to show neural entrainment to adjacent word-level and non-adjacent phrase-level structures simultaneously, based on mere distributional information and without any top-down language knowledge (Ding et al., 2016) or additional prosodic cues (Buiatti et al., 2009). These results provide further evidence for highly automatic, stimulus-bound hierarchical structure-building operations involved in language comprehension.
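The frequency-tagging logic behind the SSAEP analysis can be illustrated with simulated data: a signal that is entrained at the phrase, word, and syllable rates shows spectral peaks at exactly those frequencies, well above the broadband noise floor. Everything below (sampling rate, recording duration, component amplitudes, noise level) is made up for illustration and is not the authors' analysis pipeline.

```python
import numpy as np

FS = 100.0    # sampling rate in Hz (illustrative)
DUR = 100.0   # seconds; chosen so 0.53, 1.6 and 3.2 Hz fall on exact FFT bins
t = np.arange(0, DUR, 1 / FS)

# Simulated "entrained" signal: components at the phrase (0.53 Hz),
# word (1.6 Hz) and syllable (3.2 Hz) rates plus white noise.
rng = np.random.default_rng(0)
sig = (0.8 * np.sin(2 * np.pi * 0.53 * t)
       + 0.6 * np.sin(2 * np.pi * 1.6 * t)
       + 0.5 * np.sin(2 * np.pi * 3.2 * t)
       + 0.3 * rng.standard_normal(t.size))

# Amplitude spectrum; bin spacing is 1/DUR = 0.01 Hz.
spectrum = np.abs(np.fft.rfft(sig)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / FS)

def power_at(f_target):
    """Spectral amplitude at the bin closest to f_target."""
    return spectrum[np.argmin(np.abs(freqs - f_target))]
```

Comparing `power_at(0.53)` and `power_at(1.6)` against the median of the spectrum (a simple noise-floor estimate) shows clear peaks at the tagged frequencies, mirroring the structured-versus-random contrast reported above.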

Topic Areas: Speech Perception, Syntax
