
Neural processing of syllabic and phonemic timescale information in spoken language

Poster D72 in Poster Session D, Wednesday, October 25, 4:45 - 6:30 pm CEST, Espace Vieux-Port

Jeremy Giroud1, Agnès Trébuchon2,3, Manuel Mercier3, Matthew Davis1, Benjamin Morillon3; 1MRC Cognition and Brain Sciences Unit, University of Cambridge, UK, 2Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France, 3APHM, Clinical Neurophysiology, Timone Hospital, Marseille, France

Recent experimental and theoretical advances in neuroscience support the idea that both low- and high-frequency cortical oscillations play an important role in speech processing. One influential framework proposes that oscillatory activity in the auditory cortex aligns with the temporal structure of the acoustic speech signal in order to optimize sensory processing (Giraud and Poeppel, 2012): within the theta range, this alignment supports the extraction of discrete syllabic units from a continuous stream of speech, while higher-frequency oscillations in the gamma range (25-40 Hz) parse temporally fine-grained phonological information (the phonemic timescale). To date, however, experimental evidence has only established that theta activity in the auditory cortex tracks the syllabic rhythm during speech perception. Neural representations of phoneme-level information are seen in high-gamma power measured with electrocorticography (Mesgarani et al., 2014), but the temporal properties of this activity, and the degree of its phase-locking to the speech signal, remain unclear. The present work tests the hypothesis that speech is sampled in parallel at both syllabic and phonemic timescales. To this end, we developed a new behavioural paradigm in which spoken materials were selected so as to independently manipulate the number of syllables and phonemes in a set of French sentences. An acoustic analysis exploring a wide variety of acoustic features reveals that amplitude envelope modulations and spectral flux are good acoustic proxies of the syllabic and phonemic timescales, respectively. Intracranial neural recordings (sEEG) were acquired from ten epileptic patients, with electrodes implanted primarily in auditory regions, while they listened to these sentences. Using cerebro-acoustic coherence analyses (Peelle et al., 2012), we show that theta neural activity (3-9 Hz) significantly tracks the speech envelope and allows decoding of the syllabic rate. In contrast, low-gamma activity is most coherent with the spectral flux and allows decoding of the rate of phonemic units. These results are most pronounced within the first stages of the auditory cortical hierarchy (bilateral Heschl's gyri). We also evaluated coupling between the phase of the speech envelope and the amplitude of neural oscillations in auditory regions, and find that theta-gamma phase-amplitude coupling tracks the syllabic rate. Overall, our results support the hypothesis that speech is sampled in parallel at the syllabic and phonemic timescales at the level of the auditory cortex, with the two timescales operating in synchrony with complementary acoustic features: the amplitude envelope and the spectral flux, respectively. These findings open new avenues for understanding how the human brain transforms a continuous acoustic speech signal into discrete linguistic representations across timescales.
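To make the two acoustic proxies concrete, the sketch below computes a broadband amplitude envelope and a spectral flux time series from a mono waveform using standard signal-processing steps (Hilbert envelope plus low-pass filtering; positive frame-to-frame change in the magnitude spectrum). This is a minimal illustration, not the authors' exact pipeline; the cutoff frequency and STFT parameters are assumed values chosen for readability.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, stft

def amplitude_envelope(audio, fs, cutoff_hz=10.0):
    """Broadband amplitude envelope: magnitude of the analytic signal,
    low-pass filtered to retain slow (syllabic-rate) modulations.
    cutoff_hz is an illustrative choice, not taken from the study."""
    env = np.abs(hilbert(audio))
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, env)

def spectral_flux(audio, fs, nperseg=512):
    """Spectral flux: positive frame-to-frame change in the magnitude
    spectrum, emphasizing fast spectral transitions such as phoneme
    onsets. The STFT segment length is an illustrative assumption."""
    _, _, Z = stft(audio, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)
    diff = np.diff(mag, axis=1)
    return np.sqrt(np.sum(np.maximum(diff, 0.0) ** 2, axis=0))
```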
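Cerebro-acoustic coherence (Peelle et al., 2012) quantifies frequency-resolved consistency between a neural recording and an acoustic feature. A minimal sketch using Welch-based magnitude-squared coherence from SciPy follows; the 3-9 Hz and 25-40 Hz bands come from the abstract, while the sampling rate, segment length, and variable names (seeg_chan, envelope, flux) are hypothetical stand-ins for the actual sEEG analysis.

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(neural, feature, fs, band, nperseg=1024):
    """Mean magnitude-squared coherence between one neural channel and
    an acoustic feature time series within a frequency band.
    Both inputs must share the same sampling rate fs."""
    f, cxy = coherence(neural, feature, fs=fs, nperseg=nperseg)
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()

# Hypothetical usage, mirroring the abstract's two contrasts:
# theta coherence with the envelope vs. low-gamma coherence with flux.
# theta_coh = band_coherence(seeg_chan, envelope, fs=500, band=(3, 9))
# gamma_coh = band_coherence(seeg_chan, flux, fs=500, band=(25, 40))
```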
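The phase-amplitude coupling analysis can be sketched with a mean-vector-length estimator (Canolty et al., 2006): the theta-band phase of the speech envelope is combined with the gamma-band amplitude of the neural signal. This is one common PAC measure, assumed here for illustration rather than confirmed as the study's method; band edges follow the abstract.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def bandpass(x, fs, lo, hi, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def envelope_gamma_pac(envelope, neural, fs):
    """Mean-vector-length PAC: theta-band (3-9 Hz) phase of the speech
    envelope coupled with gamma-band (25-40 Hz) neural amplitude.
    Values near 0 indicate no coupling; larger values, stronger PAC."""
    phase = np.angle(hilbert(bandpass(envelope, fs, 3, 9)))
    amp = np.abs(hilbert(bandpass(neural, fs, 25, 40)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))
```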

Topic Areas: Speech Perception
