Poster B9, Thursday, August 16, 3:05 – 4:50 pm, Room 2000AB
Language-driven anticipatory eye-movements in naturalistic environments
Evelien Heyselaar1, David Peeters1,2, Peter Hagoort1,2;1Max Planck Institute for Psycholinguistics, The Netherlands, 2Radboud University, Donders Institute for Brain, Cognition, and Behaviour, The Netherlands
Recently, there has been increased interest in linguistic prediction, the idea that we anticipate upcoming linguistic input. One principal question that has received very little attention, however, is when, i.e. in which everyday contexts, we predict. Although ample evidence indicates that we are able to predict, whether we do so outside the laboratory, and whether we do so all the time, are open questions that have been difficult to answer. In the current three-part study, we use an updated version of the visual world paradigm (VWP) to answer the when question, and, in doing so, we also provide empirical evidence for characteristics of prediction that have, as yet, received little to no empirical support. Although the traditional VWP has been applauded as a more naturalistic way to measure the interaction between language and the visual world, it is not without its limitations (2D line drawings, source-less spoken sentences, etc.). In Experiment 1 we therefore immersed participants in rich, everyday 3D environments in Virtual Reality, with a virtual agent directly delivering the critical sentences in a monologue to the participant. Experiment 1 tested whether this increased richness still induced anticipatory eye-movements. As we are rarely in an environment with only a few objects, in Experiment 2 we increased the number of objects present to determine whether participants could still anticipate the referent object given the critical verb. This also allowed us to test whether working memory plays a role in anticipatory behaviour, as proposed in current theories of linguistic prediction. In Experiment 3 we manipulated the probability of hearing a predictable sentence, thereby testing for the role of error-based learning in anticipatory behaviour.
Participants showed the traditional anticipatory saccadic behaviour: they fixated the referent object before it was named when the object was predictable given the verb, but no early fixations were observed when the object was not predictable. We observed this behaviour in all three experiments, although significantly less so when there were more objects (Experiment 2) or fewer predictable sentences (Experiment 3) compared to Experiment 1. Thus, when manipulating the probability of hearing a predictable sentence, we still found robust anticipatory looks to objects, even when only 25% of the sentences predicted said object. Additionally, we show a significant difference in anticipatory behaviour as a function of working memory capacity, such that participants with a lower capacity showed less robust anticipatory behaviour. Overall, our study suggests that participants do constantly and consistently predict, even when it seems inefficient to do so. Our study is one of the first to attempt to measure linguistic anticipation in naturalistic environments, and although some of our results support the proposed mechanisms underlying linguistic prediction (i.e. the role of working memory), others were unexpected (i.e. no statistical-learning effect). Our study therefore highlights the need to focus on what participants actually do in naturalistic environments, not on what they can do in controlled laboratory settings.
Topic Area: Perception: Speech Perception and Audiovisual Integration