Investigating neural processing of syllabic- and phonemic-timescale information in spoken language

Poster C20 in Poster Session C, Friday, October 7, 10:15 am - 12:00 pm EDT, Millennium Hall

Jeremy Giroud1,2, Agnès Trébuchon-Dafonseca2,3, Manuel Mercier2, Matthew Davis1, Benjamin Morillon2; 1MRC Cognition and Brain Sciences Unit, University of Cambridge, UK, 2Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France, 3APHM, Clinical Neurophysiology, Timone Hospital, Marseille, France

Recent experimental and theoretical advances in neuroscience support the idea that both low- and high-frequency cortical oscillations play an important role in speech processing. One influential framework states that oscillatory activity in the auditory cortex aligns with the temporal structure of the acoustic speech signal to optimize sensory processing (Giraud and Poeppel, 2012). Within the theta range, this alignment supports the extraction of discrete syllabic units from a continuous stream of speech information. Higher-frequency oscillations in the gamma range (25-40 Hz) parse temporally fine-grained phonological information (the phonemic timescale). To date, however, experimental evidence has more firmly established that theta activity in the auditory cortex tracks syllabic rhythm during speech perception. The present work aims to test the hypothesis that speech is sampled in parallel at both syllabic and phonemic timescales. To this end, a new behavioral paradigm was developed in which spoken materials were selected so as to allow independent manipulation of the number of syllabic and phonemic units present in single French sentences. Intracranial recordings were acquired from ten epileptic patients, with electrodes implanted primarily in auditory regions, while they listened to these sentences. For the set of speech stimuli selected, an analysis of multiple acoustic cues reveals that amplitude envelope modulations and spectral flux are good acoustic proxies for the syllabic and phonemic timescales, respectively. Using cerebro-acoustic coherence analyses (Peelle et al., 2012), we show that theta neural activity tracks the speech envelope and allows decoding of syllabic rhythm, while low-gamma activity tracks spectral flux and allows decoding of the rate of phonemic units. These results were most pronounced within the earliest stages of the auditory cortical hierarchy. Finally, phase-amplitude coupling mechanisms were also scrutinized but did not allow decoding of linguistic cues. Overall, our results support the hypothesis of parallel sampling of speech at syllabic and phonemic timescales, occurring at the level of the auditory cortex. These findings open new avenues for understanding how the human brain transforms a continuous acoustic speech signal into discrete linguistic representations across a range of timescales.
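
As an illustration of the kind of analysis described above, the minimal Python sketch below computes the two acoustic proxies named in the abstract (amplitude envelope and spectral flux) and a Welch-based cerebro-acoustic coherence in the spirit of Peelle et al. (2012). All sampling rates, band edges, and function names are illustrative assumptions, not the authors' actual pipeline.

import numpy as np
from scipy.signal import hilbert, butter, filtfilt, coherence, stft

fs_audio = 16000   # assumed audio sampling rate (Hz)
fs_neural = 1000   # assumed intracranial sampling rate (Hz)

def amplitude_envelope(audio, fs_in=fs_audio, fs_out=fs_neural):
    # Wideband amplitude envelope via the Hilbert transform,
    # naively decimated to the neural sampling rate for illustration.
    env = np.abs(hilbert(audio))
    return env[:: int(fs_in // fs_out)]

def spectral_flux(audio, fs=fs_audio, nperseg=512):
    # Half-wave-rectified frame-to-frame change in the magnitude
    # spectrum: a common proxy for phonemic-timescale acoustic change.
    _, _, Z = stft(audio, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)
    return np.sum(np.maximum(np.diff(mag, axis=1), 0.0), axis=0)

def band_limited(sig, fs, lo, hi, order=4):
    # Zero-phase band-pass filter, e.g. theta (4-8 Hz) or low-gamma (25-40 Hz).
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, sig)

def cerebro_acoustic_coherence(acoustic, neural, fs=fs_neural, nperseg=1024):
    # Welch-based magnitude-squared coherence between an acoustic feature
    # time series and a neural channel, summarized over an assumed
    # theta band (4-8 Hz).
    f, Cxy = coherence(acoustic, neural, fs=fs, nperseg=nperseg)
    theta = (f >= 4) & (f <= 8)
    return f, Cxy, Cxy[theta].mean()

# Example with synthetic stand-in data (one sentence, one neural channel):
rng = np.random.default_rng(0)
audio = rng.standard_normal(fs_audio * 3)    # 3 s of noise as stand-in speech
neural = rng.standard_normal(fs_neural * 3)  # 3 s of stand-in iEEG
env = amplitude_envelope(audio)
f, Cxy, theta_coh = cerebro_acoustic_coherence(env, neural)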

Topic Areas: Perception: Auditory, Speech Perception