Anne-Lise Giraud

Speaker: Anne-Lise Giraud, Director, Auditory Language Group, Université de Genève

Saturday, August 20, 2016, 9:00 – 10:00 am, Logan Hall

Chair: Lorraine K. Tyler, University of Cambridge

Anne-Lise was born in Lyon and lived there until she obtained her PhD in Neuroscience in 1997, on peripheral auditory neurophysiology. She did a post-doc at the Functional Imaging Lab in London between 1997 and 1999, where she studied the plasticity of the auditory and visual systems during deafness and after cochlear implantation, using mainly positron emission tomography (PET) and fMRI. In 2001, Anne-Lise founded the Auditory Language Group at the Brain Imaging Center in Frankfurt am Main, Germany, where she worked on multisensory integration in speech processing.

The group survived a first move in 2004 to the Cognitive Neuroscience Lab of the DEC at the Ecole Normale Supérieure, Paris, where Anne-Lise took up a CNRS research director’s position, and a second one in 2012 to the Department of Neuroscience of the University of Geneva, where she was appointed Director of Neuroscience. Anne-Lise is interested in the transformations of the neural code between cochlear speech encoding and access to meaning. She is particularly dedicated to using basic science to understand the causes of speech and language disorders.

Modelling neuronal oscillations to understand language neurodevelopmental disorders

Perception of connected speech relies on accurate syllabic segmentation and phonemic encoding. These processes are essential because they determine the building blocks that we can manipulate mentally to understand and produce speech. Segmentation and encoding might be underpinned by specific interactions between the acoustic rhythms of speech and coupled neural oscillations in the theta and low-gamma bands. To address how neural oscillations interact with speech, we used a neurocomputational model of speech processing that generates biophysically plausible coupled theta and gamma oscillations. We show that speech could be decoded well from the artificial network’s low-gamma activity when the phase of theta activity was taken into account. Based on this model, we then asked what would happen to speech perception if different parts of the network were disrupted. We postulated that if low-gamma oscillations were shifted in frequency, speech perception would still be possible, but phonemic units within syllables would have a different format. Such phonemic format anomalies could cause difficulties in mapping idiosyncratic phonemic representations onto universal ones, such as those we are taught to become aware of when learning to read. A disruption of the auditory gamma oscillation could hence account for some aspects of the phonological deficit in dyslexia. Using MEG, and EEG combined with fMRI, we observed that dyslexia was associated with faster gamma activity in auditory cortex, and we found that this anomaly could explain several facets of the dyslexia phenotype. We also found that theta/gamma coupling was preserved despite the abnormal gamma frequency.

Using a similar approach, we reasoned that a disruption of the auditory theta network would likely cause more serious speech perception difficulties, such as perhaps those observed in autism spectrum disorders, since syllabic segmentation would also be altered. Using EEG combined with fMRI, we found that both theta activity and the coupling of auditory theta and gamma oscillations were profoundly abnormal in autism; this anomaly selectively predicted the severity of verbal impairment. These data suggest that the speech and language difficulties in dyslexia and autism can be brought together in a common theoretical framework involving the functional coupling between neural oscillations.
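The theta/gamma coupling at the heart of this framework can be illustrated with a toy simulation. The sketch below is not the speaker’s model: it simply generates a low-gamma carrier whose amplitude waxes and wanes with the phase of a theta oscillation (phase-amplitude coupling) and estimates a crude coupling index by binning the gamma envelope by theta phase. All parameters (frequencies, coupling strength, the envelope proxy) are illustrative assumptions.

```python
import numpy as np

def coupled_signal(fs=1000, dur=10.0, f_theta=5.0, f_gamma=31.0, coupling=1.0):
    """Toy theta oscillation whose phase modulates low-gamma amplitude."""
    t = np.arange(0.0, dur, 1.0 / fs)
    theta_phase = 2 * np.pi * f_theta * t
    theta = np.cos(theta_phase)
    # Gamma amplitude depends on theta phase: the phase-amplitude coupling
    # ("nesting" of gamma within theta) described in the abstract.
    gamma_amp = 1.0 + coupling * np.cos(theta_phase)
    gamma = gamma_amp * np.cos(2 * np.pi * f_gamma * t)
    return t, theta, gamma, theta_phase

def coupling_index(theta_phase, gamma, n_bins=18):
    """Crude modulation index: spread of the mean gamma envelope across
    theta-phase bins, normalised by its overall mean."""
    env = np.abs(gamma)  # rough envelope proxy, adequate for this noiseless toy
    edges = np.linspace(0.0, 2 * np.pi, n_bins + 1)
    bins = np.digitize(np.mod(theta_phase, 2 * np.pi), edges) - 1
    bin_means = np.array([env[bins == b].mean() for b in range(n_bins)])
    return bin_means.std() / bin_means.mean()

t, theta, gamma_c, ph = coupled_signal(coupling=1.0)   # coupled case
_, _, gamma_u, _ = coupled_signal(coupling=0.0)        # no coupling
mi_coupled = coupling_index(ph, gamma_c)
mi_uncoupled = coupling_index(ph, gamma_u)
```

With coupling switched on, the gamma envelope is strongly organised by theta phase and the index is large; with coupling off, the envelope is flat across phase bins and the index collapses toward zero, mirroring the intact coupling reported in dyslexia versus its disruption in autism.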