
Poster B65, Wednesday, November 8, 3:00 – 4:15 pm, Harborview and Loch Raven Ballrooms

The rhythm of semantics: Temporal expectancy and context-based prediction in a picture association paradigm

Cybelle M. Smith¹, Kara D. Federmeier¹; ¹University of Illinois at Urbana-Champaign

People are able to use context to inform their expectations about what they might experience next, as well as when they might experience it. To explore the interaction between temporal expectancy and contextual congruency effects on semantic priming, we used a nonverbal picture association paradigm in which stimuli could be presented at a wider range of SOAs than is natural for linguistic stimuli. We explored how the amount and variability of predictive preparation time (i.e., the time between display of a visual scene cue and its recently associated novel object target) would modulate the timing, nature, and amount of visuo-semantic processing facilitation at the target. We recorded EEG as 84 participants learned paired associations between visual scenes and novel objects from novel object categories. To examine context-based prediction at varying degrees of contextual specificity, each novel object category was only ever studied with a particular scene type (e.g., category 1 with beaches, category 2 with offices, etc.). At test, participants indicated whether an object matched a previously viewed scene. The object either matched the scene, matched the scene type (but not the specific scene), or mismatched the scene type. Critically, at test, the scene was previewed for either 200 ms (N=24), 2500 ms (N=24), or a variable 0–2500 ms (N=36) prior to object onset. When the temporal relationship between prime and target was fixed, ERPs time-locked to object onset at test displayed a graded pattern of facilitation: match > within-category mismatch > between-category mismatch. The time course of this sensitivity, but not its gradedness, varied with preview duration. With consistently long previews, graded facilitation emerged during the N300 time window. In contrast, when participants had little time to develop predictions, graded facilitation effects emerged only later, beginning at ~300–400 ms.
This latency shift suggests that, with greater preparation time, visual image predictions carry more visually detailed (but not necessarily more context-sensitive) information and affect processing earlier in the visual processing stream. We next varied the preview time pseudorandomly and continuously (0–2500 ms). By treating preview time as a continuous covariate, we were able to further assess the amount of preparation time necessary to observe effects of visual prediction. N300 effects of prediction emerged at ~500–1000 ms of preview time and remained stable thereafter, placing an approximate lower bound on the time needed to preactivate the global visual structure of an image. Furthermore, when the participant could not be sure precisely when in time the object target would appear, facilitation effects were no longer graded but were equally large for between- and within-category mismatches. This suggests that knowing precisely when in time a stimulus will appear, and not simply that it is coming next, facilitates full exploitation of predictive context for pre-activating visual and semantic features. This builds on more general findings that priming effects are larger under high temporal expectancy; we add that the degree of predictive sophistication is also enhanced. Thus, the characteristic rhythmicity of language, and not merely the predictability of its sequential order, may play a role in facilitating semantic access during language comprehension.
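The continuous-covariate logic described above can be illustrated with a minimal simulation. Everything here is hypothetical (simulated trial counts, effect sizes, and bin widths are illustrative assumptions, not the study's data or analysis code); the sketch shows only the general idea of binning single-trial amplitudes by preview time to locate where a facilitation effect stabilizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-trial data: preview time (ms) and an ERP amplitude
# measured in the N300 window (arbitrary units). The simulated effect
# mimics the pattern reported above: facilitation appears once preview
# time exceeds ~500 ms and then plateaus.
n_trials = 500
preview_ms = rng.uniform(0, 2500, n_trials)
facilitation = np.where(preview_ms > 500, 1.5, 0.0)
amplitude = facilitation + rng.normal(0.0, 1.0, n_trials)

# Treat preview time as a continuous covariate: average amplitude within
# consecutive 250 ms preview-time bins and inspect where the effect emerges.
bin_edges = np.arange(0, 2500, 250)
bin_means = [
    amplitude[(preview_ms >= b) & (preview_ms < b + 250)].mean()
    for b in bin_edges
]

for b, m in zip(bin_edges, bin_means):
    print(f"preview {b:4d}-{b + 250:4d} ms: mean amplitude {m:+.2f}")
```

In this toy version, the short-preview bins hover near zero while bins beyond ~500 ms sit near the plateau value, which is the qualitative signature the abstract describes for the N300 prediction effect.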

Topic Area: Perception: Orthographic and Other Visual Processes
