Slide Slam I15
Short animated movies elicit text-selective neural responses in pre-reading children
Iliana I. Karipidis1, Emily Kubota1, Sendy Caffarra1,2, Maya Yablonski1, Jason D. Yeatman1; 1Stanford University, 2Basque Center on Cognition, Brain and Language
Studying the neurobiology of language in preschool children with functional magnetic resonance imaging (fMRI) is challenging; head motion causes severe signal artifacts, and young children often get distracted during monotonous fMRI tasks. When children watch a movie during fMRI acquisition, their attentiveness increases, leading to less head motion than in task-free fMRI or repetitive, well-controlled experiments (Vanderwal et al., 2019). Naturalistic movie-watching paradigms have been used successfully to localize math-related brain activation and have been shown to be comparable to traditional functional localization tasks (Cantlon & Li, 2013). Here, we used a movie-watching fMRI paradigm to localize functional brain responses to text and to test the effects of language vs. literacy intervention programs. We designed a short movie (196s) consisting of a sequence of short cartoon clips. Approximately half of the movie included written text, in the form of single words or letters, which either matched concurrently presented speech/song or did not. We collected fMRI data while a group of pre-reading children (n=48; mean age=5.25y±0.28) watched the movie at two time points: (1) before and (2) after an intensive 2-week intervention program involving direct instruction in either (a) the foundations of literacy (n=24) or (b) oral language comprehension and grammar (n=24). Two runs of 98 volumes were acquired at each time point on a 3T Siemens scanner (TR=2s, TE=30ms, 33 slices, 3.5 mm³ voxels), along with a T1-weighted image. Following preprocessing with fMRIPrep, volumes with scan-to-scan framewise displacement >0.9mm were flagged for scrubbing. 34 participants met the inclusion criterion of <10% flagged volumes per run and were included in the analysis. We fit a general linear model with three conditions: movie segments with no text, with text congruent to speech, and with text incongruent to speech.
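The motion-scrubbing criterion above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: it assumes a per-run framewise displacement (FD) trace, such as the `framewise_displacement` column that fMRIPrep writes to its confounds file, and the thresholds stated in the abstract (flag volumes with FD > 0.9 mm; include runs with fewer than 10% flagged volumes). The example FD values are invented for demonstration.

```python
import numpy as np

FD_THRESHOLD_MM = 0.9        # scan-to-scan framewise displacement cutoff
MAX_FLAGGED_FRACTION = 0.10  # runs with >=10% flagged volumes are excluded

def flag_volumes(framewise_displacement):
    """Boolean mask of volumes exceeding the FD threshold (to be scrubbed)."""
    fd = np.asarray(framewise_displacement, dtype=float)
    return fd > FD_THRESHOLD_MM

def run_passes_criterion(framewise_displacement):
    """True if fewer than 10% of a run's volumes are flagged."""
    flags = flag_volumes(framewise_displacement)
    return bool(flags.mean() < MAX_FLAGGED_FRACTION)

# Hypothetical 98-volume run: 5 high-motion volumes (~5% flagged) -> included.
fd_trace = np.full(98, 0.2)
fd_trace[[10, 25, 40, 60, 80]] = 1.5
print(run_passes_criterion(fd_trace))  # True
```

In practice the first volume of an fMRIPrep FD trace is undefined (NaN) and nuisance handling varies by pipeline; this sketch only captures the flagging and inclusion logic described in the abstract.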
An additional regressor modeled the visual and auditory characteristics of the different cartoon clips, and 11 nuisance regressors were included. Group analysis (FWE-corrected p<0.05) revealed that, across both time points, activation for movie segments containing text was stronger bilaterally in the superior temporal gyrus (STG) and ventral occipitotemporal cortex (vOTC) than for text-free segments. Comparing segments of text with and without matching speech showed higher bilateral activation in the vOTC for text with matching speech and higher bilateral activation in the STG for text without matching speech. Text sensitivity in the left vOTC before intervention correlated with individual gain in alphabetic knowledge during intervention (r=0.39, p=0.026). These results demonstrate that a short naturalistic movie task can elicit text-specific activation in the vOTC, the part of the visual cortex that includes the visual word form area and is known to rapidly develop specialization for written language after reading instruction (Brem et al., 2010). In addition, movie segments with text increased activation in the STG, a central brain area for language processing, phonological decoding, and audiovisual integration. This work illustrates how a naturalistic experiment can be used to evaluate neurobiological changes after controlled intervention programs in young children and suggests that text sensitivity in the vOTC could be related to individual behavioral responses to intervention.
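The GLM and contrast logic described above can be illustrated with a toy sketch. This is not the authors' analysis: it uses an invented 98-volume box-car design with the three stated conditions (no text, congruent text, incongruent text), synthetic data, and an ordinary least-squares fit, omitting HRF convolution and the clip-characteristics and nuisance regressors a real analysis would include.

```python
import numpy as np

def fit_glm(Y, X):
    """Ordinary least-squares fit of a voxel time series Y on design matrix X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta

# Toy design: columns = [no-text, congruent-text, incongruent-text] box-cars
# over a 98-volume run (block boundaries are invented for illustration).
n_vols = 98
X = np.zeros((n_vols, 3))
X[:30, 0] = 1.0
X[30:64, 1] = 1.0
X[64:, 2] = 1.0

# Synthetic "voxel" that responds more strongly to text segments.
rng = np.random.default_rng(0)
true_beta = np.array([0.5, 2.0, 1.0])
Y = X @ true_beta + 0.1 * rng.standard_normal(n_vols)

beta = fit_glm(Y, X)
# Contrast: text (congruent + incongruent) vs. no text,
# analogous to the abstract's text-vs-text-free comparison.
contrast = np.array([-2.0, 1.0, 1.0])
print(contrast @ beta > 0)  # text condition yields higher activation
```

A real whole-brain analysis would fit this model per voxel after convolving the condition regressors with a hemodynamic response function; the sketch only shows the structure of the three-condition design and the direction of the text contrast.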