Slide Slam B6
fMRI activation patterns associated with linguistic vs. visual predictive cues in the visual-world paradigm
Jennifer Mack, University of Massachusetts-Amherst
Purpose. In the visual-world paradigm, participants listen to words or sentences and view visual stimulus arrays while their eye movements are tracked. This eye-tracking paradigm has been critical for understanding linguistic prediction. We used a novel task to identify fMRI activation patterns associated with prediction and subsequent processing of auditory words within the visual-world paradigm.

Methods. 20 young adults participated in a combined event-related fMRI/eye-tracking experiment. In each trial, two pictures (a single object and a pair of identical objects) sequentially appeared in and disappeared from rectangular boxes, followed by a cue indicating which picture would re-appear in a box. In the "Language-Visual World (VW)" condition, the auditory linguistic cue ("Here is one …"/"Here are two …") allowed participants to predict the upcoming word/picture as well as its future location, resulting in anticipatory eye movements. The same linguistic cues were used in the "Language-Central" condition, but all visual stimuli were presented centrally, minimizing visuospatial processing demands. In the "Visual-VW" condition, a fixation cross cued eye movements to the object's future location. The duration of the predictive window was jittered across trials (cf. Bonhage, 2015). In all conditions, the cued picture then re-appeared simultaneously with an auditory word, and participants performed a word-picture matching task (button press for a match, 87.5% of trials; no response for a mismatch). There were four runs, each with 16 trials per condition. Previous analyses of the behavioral data (Mack, 2021) demonstrated that Language-VW cues reliably elicited anticipatory fixations to the object's future location, and response times (RTs) were significantly faster for Language-VW and Language-Central than for Visual-VW trials, suggesting that linguistic cues facilitated subsequent word processing.
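The jittered predictive window can be illustrated with a minimal sketch (the duration range, trial count, and function name below are illustrative assumptions, not the study's actual parameters): drawing a different cue-to-word interval for each trial decorrelates the BOLD responses to the predictive cue and the subsequent word-picture matching event, so the GLM can estimate each subevent separately.

```python
import random

def jittered_durations(n_trials, min_dur=1.0, max_dur=4.0, seed=0):
    """Draw a jittered predictive-window duration (in seconds) for each
    trial, sampled uniformly between min_dur and max_dur.

    Jittering the cue-to-target interval reduces the correlation between
    the regressors for the predictive cue and the word-picture matching
    event in an event-related design.
    """
    rng = random.Random(seed)  # seeded for a reproducible trial schedule
    return [round(rng.uniform(min_dur, max_dur), 2) for _ in range(n_trials)]

# One run's worth of predictive-window durations (16 trials per condition)
durations = jittered_durations(16)
```

A fixed seed makes the schedule reproducible across participants if desired; in practice the schedule would be optimized rather than drawn purely at random.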
fMRI analysis (SPM12) consisted of preprocessing and GLM specification in which three subevents (picture presentation, predictive cue, word-picture matching) were modeled for each condition. Whole-brain analyses were conducted (voxel-wise FWE-corrected p < .05, cluster extent k ≥ 3). Activation patterns associated with linguistic predictive cues were identified by comparing Language-VW to Visual-VW; activation patterns associated with visuospatial processing were identified by the contrast Language-VW > Language-Central.

Results. During the predictive cue, greater activation was found for Language-VW > Visual-VW in the right pars orbitalis, whereas visual predictive cues (Visual-VW > Language-VW) elicited greater activation in the bilateral lateral occipital gyri. During word-picture matching, the Visual-VW > Language-VW contrast yielded activation in the right superior temporal gyrus (STG), whereas the reverse contrast yielded no significant results. Additionally, Language-VW > Language-Central elicited greater activation in the right precuneus during word-picture matching.

Conclusion. Consistent with the RT results, language cues appear to have facilitated the prediction of an auditory word form. When presented with a language cue (vs. a visual cue), participants showed increased activation in the right pars orbitalis, which may index word prediction. Language cues also led to a relative decrease in activation in the right STG during word processing, which may reflect pre-activation of the auditory word form. This study demonstrates the utility of combined visual-world eye-tracking/fMRI for detecting the neural correlates of word prediction and processing.

References. Mack, 2021. Lang. Cogn. Neurosci. Bonhage, 2015. Cortex, 68.
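The cluster-extent criterion used in the whole-brain analyses (suprathreshold voxels are kept only if they belong to a cluster of at least k = 3 contiguous voxels) can be sketched in pure Python. This is a toy illustration, not SPM12's implementation; the threshold value and the nested-list representation of the statistic map are assumptions for the example.

```python
from collections import deque

def cluster_threshold(stat_map, voxel_thresh, k_min=3):
    """Keep voxels of a 3D statistic map (nested lists, indexed
    stat_map[x][y][z]) that exceed voxel_thresh AND belong to a cluster
    of at least k_min face-connected suprathreshold voxels."""
    nx, ny, nz = len(stat_map), len(stat_map[0]), len(stat_map[0][0])
    supra = {(x, y, z)
             for x in range(nx) for y in range(ny) for z in range(nz)
             if stat_map[x][y][z] > voxel_thresh}
    kept, seen = set(), set()
    for start in supra:
        if start in seen:
            continue
        # Breadth-first search over face-connected neighbours (6-connectivity)
        cluster, queue = [], deque([start])
        seen.add(start)
        while queue:
            x, y, z = queue.popleft()
            cluster.append((x, y, z))
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nbr = (x + dx, y + dy, z + dz)
                if nbr in supra and nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        if len(cluster) >= k_min:  # cluster-extent criterion
            kept.update(cluster)
    return kept
```

In SPM12 the voxel-wise threshold itself comes from FWE correction over the whole brain; here the extent filter simply discards isolated suprathreshold voxels, which is the role k ≥ 3 plays in the reported analyses.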