
Mapping the timescales of language representations in the cerebellum using interpretable multi-timescale models

Poster C36 in Poster Session C, Friday, October 7, 10:15 am - 12:00 pm EDT, Millennium Hall

Amanda LeBel¹, Shailee Jain², Richard Ivry¹, Alexander Huth²; ¹University of California, Berkeley; ²The University of Texas at Austin

Natural speech contains information at multiple timescales, ranging from spectrotemporal structure on a sub-second scale to semantics and discourse unfolding over longer periods. The human cerebral cortex processes this temporal information hierarchically, from auditory cortex, which is biased toward short timescales, to high-level regions such as prefrontal cortex and precuneus, which represent increasingly long timescales. Previous work has shown that the cerebellum responds robustly during language processing and represents semantic information. While the cerebellum is known to play a central role in representing the precise temporal relationships essential for motor control and perception, its temporal organization for language is not well understood. In this work, we built computational models to characterize processing timescales within the cerebellum during natural language comprehension. We collected fMRI data while participants (2 male, 3 female) listened to over five hours of naturally spoken, narrative English language stimuli. We then used voxel-wise encoding models of the cerebellum to predict the fMRI response of each voxel from features of the stimuli. These features were extracted from a multi-timescale recurrent neural network (MT-RNN) in which each unit integrates linguistic information at a fixed, distinct timescale; the range of timescales in the model was derived from distributions of natural language. The MT-RNN encoding models significantly predicted cerebellar responses on a held-out dataset. We then used the regression weights on different MT-RNN units to estimate each voxel's processing timescale. Similar to what is observed in cortex, we found that the cerebellum represents information at several different temporal scales. Surprisingly, we also found a gradient of timescales across cerebellar depth, such that superficial areas of the cerebellar folia represent longer-timescale information and deeper regions of the folia represent shorter-timescale information. We ruled out possible artifactual accounts of this temporal gradient across folial depth. First, we estimated the hemodynamic response function (HRF) of each cerebellar voxel using a finite impulse response (FIR) model and found no difference in the estimated HRF across depth in the cerebellum. Second, we collected resting-state data using the same sequence parameters and found no difference in the BOLD power spectrum across depth, suggesting that the timescale variation across depth is not simply due to frequency differences in the BOLD signal. Finally, we examined the semantic selectivity of voxels across depth and found that they capture similar semantic concepts despite processing information at different timescales. Overall, these results provide evidence of cerebellar involvement in temporal processing in a novel task domain, language, with an unexpected organization of semantic information over multiple timescales.
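As an illustration of the voxel-wise encoding approach described above, the following is a minimal sketch (not the authors' code) of fitting ridge-regression weights from stimulus features to BOLD responses and evaluating prediction on held-out data. The variable names (X_train, Y_train, alpha) and the single shared regularization parameter are assumptions; in practice, delayed copies of the features and per-voxel regularization would typically be used.

```python
import numpy as np

def fit_encoding_model(X_train, Y_train, alpha=1.0):
    """Fit ridge regression weights mapping stimulus features to BOLD.

    X_train: (T, F) matrix of MT-RNN features (delayed copies would
             normally be stacked to model the hemodynamic lag).
    Y_train: (T, V) matrix of BOLD responses, one column per voxel.
    Returns W: (F, V) weight matrix.
    """
    F = X_train.shape[1]
    # Closed-form ridge solution: W = (X'X + alpha*I)^-1 X'Y
    W = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(F),
                        X_train.T @ Y_train)
    return W

def evaluate(X_test, Y_test, W):
    """Per-voxel Pearson correlation between predicted and held-out BOLD."""
    Y_hat = X_test @ W
    num = ((Y_hat - Y_hat.mean(0)) * (Y_test - Y_test.mean(0))).sum(0)
    den = Y_hat.std(0) * Y_test.std(0) * len(Y_test)
    return num / den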
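The abstract states that the regression weights on MT-RNN units, each with a fixed timescale, were used to estimate every voxel's processing timescale. One simple estimator consistent with that description, though not necessarily the authors' exact method, is a weight-magnitude-weighted average of the unit timescales in log space:

```python
import numpy as np

def estimate_voxel_timescales(W, unit_timescales):
    """Assign each voxel a timescale from its encoding-model weights.

    W: (F, V) ridge weights; unit_timescales: (F,) fixed timescale
    (e.g. in words or seconds) that each MT-RNN unit integrates over.
    Uses squared weight magnitude as each unit's contribution and
    averages log-timescales, since timescales span orders of magnitude.
    """
    contrib = W ** 2                                # (F, V) importances
    contrib /= contrib.sum(axis=0, keepdims=True)   # normalize per voxel
    log_ts = np.log(unit_timescales)[:, None]       # (F, 1)
    return np.exp((contrib * log_ts).sum(axis=0))   # (V,) per-voxel timescale
```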
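For the first control analysis, a finite impulse response model estimates each voxel's HRF without assuming a canonical shape, by regressing the BOLD signal on lagged copies of a stimulus regressor. A sketch under that interpretation, with hypothetical inputs stim and bold:

```python
import numpy as np

def fir_hrf(stim, bold, n_lags=10):
    """Estimate a voxel's HRF with a finite impulse response model.

    stim: (T,) stimulus time course; bold: (T,) voxel BOLD signal.
    Builds a design matrix of lagged stimulus copies and solves least
    squares; the fitted coefficients are the HRF sampled at each lag (TR).
    """
    T = len(stim)
    X = np.zeros((T, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stim[:T - lag]
    hrf, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return hrf
```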
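For the second control, the BOLD power spectrum can be compared across depth bins in the resting-state data. A sketch using Welch's method from SciPy; the depth_labels input and the TR value are assumptions, not details given in the abstract:

```python
import numpy as np
from scipy.signal import welch

def depth_power_spectra(rest_bold, depth_labels, tr=2.0):
    """Compare resting-state BOLD power spectra across cerebellar depth.

    rest_bold: (T, V) resting-state time series; depth_labels: (V,)
    integer depth bin per voxel (e.g. 0 = superficial folia, higher =
    deeper). Returns frequencies and one mean spectrum per depth bin.
    """
    spectra = {}
    for d in np.unique(depth_labels):
        ts = rest_bold[:, depth_labels == d]
        f, pxx = welch(ts, fs=1.0 / tr, axis=0)  # spectrum per voxel
        spectra[d] = pxx.mean(axis=1)            # average within bin
    return f, spectra
```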

Topic Areas: Speech Perception, Computational Approaches