Slide Slam F11
What and when interactions during speech tracking
Sanne ten Oever1,2, Andrea Martin1,2; 1Max Planck Institute for Psycholinguistics, 2Donders Institute for Cognitive Neuroimaging
Speech tracking benefits from proactively predicting the timing and content of speech. Some models explain how the brain tracks temporal structure in speech (when), while others explain how the brain computes which speech units come next (what). So far, these theoretical accounts have lived in relatively independent worlds, seemingly implying that what and when can be treated as two independent processes. I will argue that this implicit assumption is incorrect. First, temporal speech dynamics depend on speech content. Second, the efficiency of brain processing changes as a function of the predictability of speech content. When this what/when dependency is taken into account, isochronous oscillatory dynamics can track naturally-timed speech. I will demonstrate this with computational modelling as well as with experimental behavioral and neuronal data. The results reveal that speech tracking entails an interaction between oscillations and predictions flowing from internal language models.
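The core idea can be illustrated with a toy simulation. This is a hedged sketch, not the authors' published model: we assume an isochronous oscillation gates the gain on sensory input, and that predictions from an internal language model give predictable words a head start in activation. As a result, a predictable word crosses the recognition threshold at an earlier oscillatory phase than an unpredictable one, so an isochronous oscillator can accommodate content-dependent shifts in word timing. All parameter values (frequency, threshold, evidence rate) are arbitrary illustrations.

```python
import math

FREQ_HZ = 4.0     # theta-range oscillation (assumed, illustrative)
THRESHOLD = 1.0   # activation needed to recognize a word (arbitrary units)

def excitability(t):
    """Oscillatory gain on incoming input at time t (seconds)."""
    return 0.5 * (1 + math.cos(2 * math.pi * FREQ_HZ * t))

def recognition_time(onset, evidence_rate, predictability, dt=1e-4):
    """Time at which gain-weighted evidence reaches threshold.

    `predictability` acts as top-down pre-activation from an internal
    language model; higher values mean less sensory evidence is needed.
    """
    activation = predictability
    t = onset
    while activation < THRESHOLD:
        activation += excitability(t) * evidence_rate * dt
        t += dt
        if t > onset + 1.0:       # give up after one second
            return None
    return t

# Same sensory input, different predictability: the predictable word
# is recognized earlier in the oscillatory cycle.
t_pred = recognition_time(onset=0.0, evidence_rate=5.0, predictability=0.5)
t_unpred = recognition_time(onset=0.0, evidence_rate=5.0, predictability=0.0)
print(f"predictable: {t_pred:.3f}s, unpredictable: {t_unpred:.3f}s")
```

The difference between the two recognition times is the kind of content-dependent timing shift that a purely temporal (when-only) tracking model would miss.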