Windows into Language: Benefits and Challenges of Combining Methods
Thursday, August 22, 2019, 1:45 – 3:30 pm, Finlandia Hall
This symposium brings together researchers studying the neurobiology of language from different methodological perspectives. Talks will be followed by a panel discussion of the benefits and challenges of combining methods to study the neurobiology of language.
Chair and Discussant:
Sonja A. Kotz is a translational cognitive neuroscientist who investigates predictive coding and cognitive control in speech, language, and communication in healthy and patient populations. She utilizes a wide range of behavioral and neuroimaging methods (M/EEG, EEG oscillations, and functional and structural magnetic resonance imaging). She heads the section of neuropsychology at Maastricht University, the Netherlands, and holds several honorary professorships (Manchester, Glasgow, Leipzig, and Lisbon).
Evelina (Ev) Fedorenko
Language is a remarkable system for expressing intricate ideas, unparalleled by any other animal communication system. This feature makes language both the holy grail of research on human cognition and one of its most challenging pursuits, given the lack of animal models. The only solution to the latter is to use the rich arsenal of tools from cognitive science, neuroscience, and computer science in the hope of obtaining robust, converging answers about the functional architecture of language. I will use the much-debated question of the relationship between lexico-semantic and syntactic representations and processes to illustrate how fMRI, ECoG, and behavioral data from neurotypical adults and patients with brain lesions can sometimes paint a remarkably clear and consistent picture: in this case, a tight integration between lexical semantics and syntax. This answer further aligns with many current theoretical linguistic frameworks. I will also briefly discuss how advances in machine learning can help shed light on this and similar architectural questions, and perhaps bring us closer to computationally precise models of different linguistic processes.
About Ev Fedorenko
Ev Fedorenko is a cognitive neuroscientist who specializes in the study of the human language system. She received her Bachelor's degree in Psychology and Linguistics from Harvard University in 2002 and then pursued graduate studies in cognitive science and neuroscience at MIT. After receiving her Ph.D. in 2007, she was awarded a K99/R00 career development award from NICHD and stayed on at MIT as a postdoctoral researcher and then a research scientist. In 2014, she joined the faculty at HMS/MGH. Fedorenko aims to understand the computations we perform and the representations we build during language processing, and to provide a detailed characterization of the brain regions underlying these computations and representations. She uses an array of methods, including fMRI, ERPs, MEG, intracranial recordings and stimulation, and tools from Natural Language Processing, and works with diverse populations, including healthy children and adults as well as individuals with developmental and acquired brain disorders.
Riitta Salmelin
Aalto University, Finland
By now, we know what kind of MEG or fMRI activation patterns to expect in basic language paradigms, such as spoken or written word perception and picture naming. Building on this groundwork, it has been possible to address neural correlates of language development, learning, and disorders, and even to begin to elucidate the brain organization of meaning and knowledge. However, the choice of imaging measures can strongly influence the way we interpret brain function. MEG evoked responses, oscillatory power, and real-time connectivity, as well as fMRI activation and slow haemodynamic interareal correlations, afford complementary views of language processing. I will discuss findings from experiments in which both MEG and fMRI data were recorded from the same individuals using exactly the same language paradigms. These studies have demonstrated similarities but also highlighted the distinct functional sensitivities of different neuroimaging proxies. Together, the various MEG and fMRI measures offer rich possibilities for multiview imaging that can reach beyond the mere combination of the location and timing of neural activation and help uncover the organizational principles of language function in the human brain.
About Riitta Salmelin
Riitta Salmelin is Professor of Imaging Neuroscience at the Department of Neuroscience and Biomedical Engineering, Aalto University. Her research focuses on two complementary lines of investigation: uncovering the neural organization of human language function through the use and development of imaging methods and computational modelling, and examining the sensitivity of MEG and fMRI activation and network measures to different neural and cognitive processes. She has pioneered the use of MEG in language research and applied multimodal MEG/fMRI and interareal connectivity in the study of human cognition. She is the senior editor of the first handbook on MEG ("MEG: An Introduction to Methods", Oxford University Press, 2010) and an Associate Editor of Human Brain Mapping. Honours include membership in the Academia Europaea, the Wiley Young Investigator Award of the Organization for Human Brain Mapping, and the Justine and Yves Sergent Award.
Kate Watkins
Professor of Cognitive Neuroscience
University of Oxford
There are many tools available for studying the neurobiology of language. Traditionally, measures such as fMRI, MEG, and EEG provided only correlational information about where or when a brain area was activated during a task. Causal inference was typically reserved for studies of patients with brain lesions or those using brain stimulation. Both sets of methods have provided valuable insights into the neurobiology of language. In this talk, I will provide examples of how these tools can be usefully combined. For example, data obtained using different imaging modalities in the same participants can constrain analyses and provide confirmatory evidence of abnormality affecting both structure and function in developmental disorders. By combining interference and measurement tools, we can ask questions about the causal role of brain areas or explore the interactions between them and their connectivity. I will demonstrate how we have done this in studies of speech perception, using TMS to temporarily perturb brain function and then EEG or MEG to measure the effects.
About Kate Watkins
Kate Watkins is a cognitive neuroscientist in the Department of Experimental Psychology at the University of Oxford. She is a Fellow of St. Anne's College, Oxford, where she teaches Psychology. Kate trained in neuropsychology and neuroimaging at the Institute of Child Health, where she did her PhD studying the members of the KE family, who have a mutation in FOXP2. She did a postdoc at the Montreal Neurological Institute with Tomas Paus, where she learned to use non-invasive brain stimulation to study the motor system in speech perception. Kate returned to the UK, to Oxford, working initially at the FMRIB Centre and then in Experimental Psychology, where she established the Speech and Brain Research Group. The group uses brain imaging and brain stimulation to study children and adults with and without disorders affecting speech and language. Current studies in the lab involve using brain stimulation to enhance fluency in people who stutter, brain imaging of children with developmental language disorder, brain stimulation to interfere with or enhance speech motor learning, and imaging to map the laryngeal motor cortex. Kate has also studied plasticity for auditory and language functions in people who are congenitally blind.