Slide Sessions

Slide Session D

Friday, October 8, 2021, 8:00 - 9:30 am PDT

Neural tracking of acoustic, lexical, and semantic information in attended and ignored speech

Malte Wöstmann1,2, Frauke Kraus1,2, Lea-Maria Schmitt1,2, Jonas Obleser1,2; 1Department of Psychology, University of Lübeck, Lübeck, Germany, 2Center of Brain, Behaviour, and Metabolism, University of Lübeck, Lübeck, Germany

When we listen to continuous speech, low-frequency electrophysiological brain responses phase-lock to the acoustic envelope. This so-called speech tracking is attenuated when attention is directed away from the speech or when the acoustic signal is degraded. Recently, demonstrations of such neural tracking mechanisms have been extended to non-acoustic, linguistic features (e.g., lexical segmentation at word onsets; word frequency; semantic dissimilarity). However, it is not clear how the tracking of linguistic features is affected by selective attention and acoustic degradation. Here, we employed electroencephalography (EEG) recordings from N = 22 participants who were presented with an acoustically degraded spoken-language track to one ear (using 8-band noise vocoding) and an acoustically intact spoken-language track to the other ear. The instruction to attend either to the left- or right-ear input varied on a trial-by-trial basis. Using linear regularised forward models of the EEG, we first estimated the temporal response function (TRF) to the acoustic envelope, word onsets, word frequency, and semantic dissimilarity (quantified as the Euclidean distance between vectors representing the meanings of the previous and current word). Second, to assess the encoding fidelity of acoustic and linguistic information, we quantified encoding accuracy as the correlation of the actual EEG response with the EEG response reconstructed by the TRF model. We then tested the gain in encoding accuracy when individual linguistic features were added to the TRF model. At shorter latencies (< 300 ms), the acoustic-envelope tracking response was amplified for attended versus ignored speech. At longer latencies (> 300 ms), linguistic features elicited a pronounced tracking response with a negative deflection around 400 ms after word onset over parietal regions, similar to the N400. Notably, segmental information (i.e., word onsets) enhanced encoding accuracy for attended speech irrespective of degradation. Word frequency enhanced encoding accuracy only at longer latencies and only for attended intact speech. Semantic dissimilarity enhanced encoding accuracy at longer latencies for attended speech, irrespective of degradation. The present results demonstrate that, in addition to the acoustic envelope, the human brain tracks certain linguistic features of speech inside the focus of attention, primarily for intact but also for acoustically degraded speech. Outside the focus of attention, however, no statistically robust tracking of linguistic features was discernible in the present study. In sum, later neural speech tracking responses, which have proven difficult to model and interpret in the literature, are interactively shaped by attention and linguistic information.
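
The modelling pipeline described above (a lagged linear forward model with ridge regularisation, and encoding accuracy as the correlation between recorded and reconstructed EEG) can be sketched as follows. This is a minimal illustration, not the authors' code: the array names, lag count, and regularisation constant are assumptions.

```python
# Minimal sketch of a regularised linear forward (TRF) model, assuming
# hypothetical inputs: `features` (time x n_features; e.g. envelope, word
# onsets, word frequency, semantic dissimilarity) and `eeg` (time x channels),
# both at the same sampling rate. Lag range and lambda are illustrative.
import numpy as np

def lagged_design(features, n_lags):
    """Stack time-lagged copies of each feature (lags 0 .. n_lags-1 samples)."""
    T, F = features.shape
    X = np.zeros((T, F * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * F:(lag + 1) * F] = features[:T - lag]
    return X

def fit_trf(X, eeg, lam=1e3):
    """Ridge regression: w = (X'X + lam*I)^(-1) X'y, one weight column per channel."""
    XtX = X.T @ X
    return np.linalg.solve(XtX + lam * np.eye(XtX.shape[0]), X.T @ eeg)

def encoding_accuracy(X, eeg, w):
    """Per-channel Pearson r between recorded and TRF-reconstructed EEG."""
    pred = X @ w
    a = eeg - eeg.mean(axis=0)
    b = pred - pred.mean(axis=0)
    return (a * b).sum(axis=0) / (np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0))

# The gain from a linguistic feature can then be assessed by comparing
# encoding_accuracy (on held-out data) for models fit with and without
# that feature's columns in the design matrix.
```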

Neurochemical changes of healthy ageing on semantic representation

JeYoung Jung1; 1School of Psychology, University of Nottingham, UK

There is now considerable convergent evidence from multiple methodologies and clinical studies that the human anterior temporal lobe (ATL) is a semantic representational hub. Previously, we demonstrated that the regional GABA concentration in the ATL predicts human semantic processing and that GABAergic action in the ATL is crucial to its neurobiological contribution to semantic representation in young individuals. It has been argued that age-related changes in the neurochemical properties of the GABAergic system may underlie cognitive decline in older adults. However, how age-related neurochemical changes affect ATL semantic function remains unclear. Here, we combined functional magnetic resonance imaging (fMRI) with resting-state magnetic resonance spectroscopy (MRS) to measure task-related BOLD signal changes and GABA levels in the left ATL and in the vertex as a control region. Participants performed a semantic association task and a pattern matching task (control task) during fMRI. Data were collected from 23 young (aged 19-29 years) and 28 older (aged 60-90 years) healthy adults. Our results demonstrated that older adults, compared to younger adults, exhibited reduced GABA levels in the ATL as well as poorer semantic task performance (slower reaction times; RTs). In older adults, task-induced regional activity in the ATL was decreased compared to young adults. Importantly, the degree of task-related BOLD signal change in the ATL was associated with semantic task performance: older adults with stronger regional ATL activity performed the semantic task better, with faster RTs. In addition, semantic control regions, including the inferior frontal gyrus (IFG) and posterior middle temporal gyrus (pMTG), showed reduced task-induced activity in older adults relative to young adults. To evaluate network-level changes in older adults, we performed functional connectivity (FC) analyses between the key semantic regions. We found reduced FC between the IFG and pMTG in older adults compared to young adults. Importantly, the degree of FC between the semantic regions was associated with semantic task performance: individuals with stronger interhemispheric ATL connectivity and IFG-pMTG connectivity performed the semantic task better (higher accuracy and faster RTs). Our combined fMRI and MRS investigation demonstrated that age-related GABA changes in the ATL and neural changes in the semantic system are crucial to semantic task performance in healthy older adults. This study contributes to a comprehensive understanding of how age-related differences in neurochemical processes may underlie cognitive decline in semantic representation.
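
As a rough illustration of the connectivity measure, the sketch below computes ROI-to-ROI FC as a Fisher-z-transformed Pearson correlation between BOLD time series. The abstract does not specify the exact FC estimator, so this common choice is an assumption, and all variable names are hypothetical.

```python
# Minimal sketch of ROI-to-ROI functional connectivity, assuming hypothetical
# 1-D arrays of preprocessed BOLD time series (e.g. `ifg_ts`, `pmtg_ts`).
import numpy as np

def roi_fc(ts_a, ts_b):
    """Fisher-z-transformed Pearson correlation between two ROI time series;
    the z-transform makes FC values comparable across participants."""
    r = np.corrcoef(ts_a, ts_b)[0, 1]
    return np.arctanh(r)

# e.g. fc_ifg_pmtg = roi_fc(ifg_ts, pmtg_ts)  # one value per participant,
# which can then be compared between the young and older groups and
# correlated with semantic task performance.
```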

Narrative Event Segmentation in the Cortical Reservoir

Peter Ford Dominey1; 1INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000, Dijon, 2Robot Cognition Laboratory, Institute Marey, Dijon

During continuous perception of movies or stories, awake humans display cortical activity patterns that reveal hierarchical segmentation of event structure. Sensory areas like auditory cortex display high-frequency segmentation related to the stimulus, while semantic areas like posterior medial cortex display a lower-frequency segmentation related to transitions between events (Baldassano et al. 2017). These hierarchical levels of segmentation are associated with different time constants for processing. Chien and Honey (2020) observed that when two groups of participants heard the same sentence in a narrative, preceded by different contexts, neural responses for the two groups were initially different and then gradually aligned. The time constant for alignment followed the segmentation hierarchy: sensory cortices aligned most quickly, followed by mid-level regions, while some higher-order cortical regions took more than 10 seconds to align. These hierarchical segmentation phenomena can be considered in the context of processing related to comprehension. Uchida et al. (2021) recently described a model of discourse comprehension in which word meanings are modeled by a language model pre-trained on a billion-word corpus (Yamada et al. 2020). During discourse comprehension, word meanings (Wikipedia2Vec embeddings) are continuously integrated in a recurrent cortical network, the Narrative Integration Reservoir. The model demonstrates novel discourse and inference processing, in part because of two fundamental characteristics: real-world event semantics are represented in the word embeddings, and these are integrated in a reservoir network that has an inherent gradient of functional time constants due to its recurrent connections. Here we demonstrate how this model displays hierarchical narrative event segmentation properties. The reservoir produces activation patterns that are segmented by the hidden Markov model (HMM) of Baldassano et al. (2017) in a manner comparable to human cortical activity. Reservoir neurons can be partitioned into virtual cortical "areas" based on their distribution of time constants arising from the network recurrence. Context construction displays a continuum of time constants across these areas, while context forgetting has a fixed time constant across them. Virtual areas formed by subgroups of reservoir neurons with faster time constants segmented the narrative into shorter events, while those with longer time constants preferred longer events. A linear integrator could produce similar results for segmentation and for context construction and forgetting, but did not reproduce the distribution of time constants in the cortical hierarchy. This recurrent neural network thus simulates narrative event processing as revealed by the fMRI event segmentation algorithm of Baldassano et al. (2017), provides a novel explanation of the asymmetry between narrative forgetting and construction observed by Chien and Honey (2020), and offers a natural account of the cortical hierarchy of narrative event segmentation time constants. The model extends the characterization of online integration processes in discourse to more extended narrative, and demonstrates how reservoir computing provides a useful model of cortical processing of narrative structure. Research supported by the Région Bourgogne-Franche-Comté ANER Robotself.
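
The integration step at the heart of this account can be sketched as a standard leaky echo-state reservoir. This is a minimal illustration under stated assumptions: the embedding dimensionality, reservoir size, spectral radius, and leak-rate range below are all illustrative, and the actual Narrative Integration Reservoir may differ in these details.

```python
# Minimal echo-state reservoir sketch: word embeddings are integrated through
# fixed random recurrent weights; heterogeneous leak rates give neurons a
# gradient of effective time constants. All sizes/constants are assumptions.
import numpy as np

rng = np.random.default_rng(seed=0)
D_EMB, N_RES = 100, 500                          # embedding dim, reservoir size
W_in = 0.1 * rng.normal(size=(N_RES, D_EMB))     # fixed random input weights
W = rng.normal(size=(N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius to 0.9
leak = rng.uniform(0.05, 1.0, size=N_RES)        # per-neuron leak rate

def integrate(embeddings):
    """Run a sequence of word embeddings (n_words x D_EMB) through the
    reservoir; returns the state trajectory (n_words x N_RES)."""
    x = np.zeros(N_RES)
    states = []
    for u in embeddings:
        # Leaky update: small leak -> slow time constant, leak near 1 -> fast
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# The resulting state trajectory is the kind of signal an HMM event-segmentation
# model (as in Baldassano et al. 2017) can partition into discrete events, and
# neurons can be grouped into virtual "areas" by their leak rates.
```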

Complementary hemispheric lateralization of language and social processing in the human brain

Reza Rajimehr1, Arsalan Firoozi2, Hossein Rafipoor3, Nooshin Abbasi4, John Duncan1,3; 1University of Cambridge, 2Sharif University of Technology, 3University of Oxford, 4McGill University

Humans have a unique ability to use language for social communication. The neural architecture for language comprehension and production may have emerged in brain areas that were originally involved in social cognition. Here we directly tested the fundamental link between language and social processing using functional MRI data from over 1000 human subjects. Cortical activations in language and social tasks showed a striking similarity with a complementary hemispheric lateralization: within core language areas, activations were left-lateralized in the language task and right-lateralized in the social task. An opposite pattern was observed in a minority of subjects who had strong language activations in the right hemisphere. The fine-grained organization of the lateralization effects revealed further detail. In many areas, distinct subregions showed lateralization for either language or social processing. However, in one area located in the posterior superior temporal sulcus (STSp), these subregions highly overlapped, suggesting a competition between language and social processing for shared neural resources. Consistent with this topography, we found a correlation between the magnitude of language lateralization in left STSp and the magnitude of social lateralization in right STSp. The lateralization effects in STSp and other core areas of lateral temporal cortex predicted performance in the language and social tasks. Outside the language network, within regions of prefrontal cortex, there was a left-hemisphere dominance for both language and social activations, perhaps indicating multimodal integration of social and communicative information. Our findings provide new insights into how homotopic areas of the two hemispheres are complementarily involved in language and social processing. Such complementary hemispheric lateralization might be impaired in autism spectrum disorder.
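
The "magnitude of lateralization" referred to above is commonly quantified with a laterality index over homotopic regions. The abstract does not give its formula, so the standard (L - R)/(L + R) definition sketched below is an assumption, and the variable names are hypothetical.

```python
# Minimal sketch of a laterality index over homotopic ROIs. The (L - R)/(L + R)
# convention is an assumption here, not necessarily the study's measure; it
# presumes positive activation values.
def laterality_index(left_act, right_act):
    """LI in [-1, 1]: > 0 means left-lateralized, < 0 right-lateralized.
    Inputs: mean task activation (e.g. beta or z values) in homotopic ROIs."""
    return (left_act - right_act) / (left_act + right_act)

# e.g. li_lang = laterality_index(stsp_left_lang, stsp_right_lang)
#      li_social = laterality_index(stsp_left_social, stsp_right_social)
# Correlating li_lang and li_social across subjects tests the coupling between
# language and social lateralization described above.
```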
