
Slide Slam E14

Not all inference is the same: Dissociable forms of human inference during narrative comprehension revealed by NLP language models

Slide Slam Session E, Tuesday, October 5, 2021, 5:30 - 7:30 pm PDT

Takahisa Uchida1, Nicolas Lair2,3, Hiroshi Ishiguro1, Peter Ford Dominey2,3; 1Ishiguro Lab, Graduate School of Engineering Science, Osaka University, 2INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, 3Robot Cognition Laboratory, Marey Institute

A vast experimental literature demonstrates that language comprehension relies on the ability of comprehenders to access general event knowledge that is not explicitly stated in the narrative. The current research investigates how such event knowledge may be coded in language models developed in the context of natural language processing (NLP) in machine learning. We investigate inference on events using two well-documented protocols from Metusalem et al. 2012 and McKoon & Ratcliff 1986 (hereafter Metusalem and McKoon), and demonstrate that the relation between local semantic processing and event inference dissociates depending on the language model.

In Metusalem, subjects are exposed to a sentence, or to the same sentence preceded by an event-evoking discourse, and then tested on one of three types of words: Expected, Unexpected-Related, and Unexpected-Unrelated. In the sentence-only context, N400s are increased for both Unexpected types with respect to Expected. In the event-evoking discourse, the N400 for the Unexpected-Related type is rescued, revealing access to event knowledge that allows inference. In McKoon, subjects are exposed to one of two sentences that either evoke a context, e.g. about writing a letter, or use many of the same words but do not evoke that context. Subjects are slower to report that the target word did not appear in the sentence for the context-evoking sentences only, revealing access to event representations that prime the target word.

We previously reproduced the Metusalem results using a discourse vector made from averaged Wikipedia2Vec embeddings (Uchida et al. 2021). In the current research, we compared Inference performance (Unexpected-Related vs. Unexpected-Unrelated) and simple Semantic performance (related vs. unrelated word pairs from Chwilla et al. 1995) for 22 language models based on word2vec and GloVe, and found a highly significant correlation between Semantic and Metusalem-Inference performance. We made the same comparison between Semantic performance and McKoon-Inference: in this case, increased performance on the Semantic task did not correspond to increased performance on the Inference task. This indicates that inference as measured by Metusalem and by McKoon relies on dissociable processes in the context of word2vec-based models.

We further analyzed these processes by replicating the study using 23 BERT language models. BERT is designed to encode sentence context, and we thus predicted that it would demonstrate a more robust correlation between semantic and inference processing. Indeed, we observed a strong correlation between Semantic and Inference processing for the Metusalem task, as before, and no relation between the two for McKoon inference, indicating that the same dissociation holds for BERT-based models.

In summary, inference as assessed by Metusalem and by McKoon relies on dissociable processes. Word2vec and BERT both allow modeling of local semantic processing, but for comprehension that relies on access to knowledge of events, BERT provides a model that is more consistent with human processing as assessed by Metusalem, though not by McKoon. Future research should address the computational underpinnings of these phenomenological observations.
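To make the averaged-embedding approach concrete, the following is a minimal Python sketch of how a discourse vector can be built by averaging static word embeddings (word2vec, GloVe, or Wikipedia2Vec style) and compared to candidate target words by cosine similarity. The embedding source, tokenization, and example items are illustrative assumptions, not the actual stimuli or pipeline of the study.

import numpy as np

def discourse_vector(tokens, emb):
    # Average the embeddings of all in-vocabulary tokens: the simple
    # composition scheme assumed here for representing the discourse.
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# `emb` is assumed to be a token-to-vector mapping, e.g. gensim KeyedVectors
# loaded from pretrained word2vec, GloVe, or Wikipedia2Vec files.
# Placeholder Metusalem-style items (not the published stimuli):
# context = "the crowd roared as the striker lined up the penalty kick".split()
# d = discourse_vector(context, emb)
# for target in ["goal", "referee", "pencil"]:  # Expected / Unexp.-Related / Unexp.-Unrelated
#     print(target, cosine(d, emb[target]))

On this account, an event-evoking discourse should pull the Unexpected-Related target closer to the discourse vector than the Unexpected-Unrelated one, mirroring the rescued N400.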
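The cross-model comparison then amounts to correlating per-model scores on the two tasks. A minimal sketch, assuming each model contributes one Semantic score and one Inference score (the values below are invented for illustration; the study used 22 word2vec/GloVe models and 23 BERT models):

import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-model scores, one pair per language model (invented values).
semantic = np.array([0.41, 0.48, 0.52, 0.57, 0.61, 0.66])
inference = np.array([0.32, 0.39, 0.45, 0.49, 0.55, 0.60])

r, p = pearsonr(semantic, inference)
print(f"r = {r:.2f}, p = {p:.4f}")
# The reported dissociation is that this correlation is strong when
# `inference` holds Metusalem-Inference scores and absent for McKoon-Inference.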
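For the BERT-based replication, one simple way to obtain a contextual representation is mean pooling over the final hidden states, sketched below with the Hugging Face transformers library. The model name, pooling choice, and scoring are assumptions for illustration; the exact procedure across the 23 BERT variants is not detailed in the abstract.

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def pooled_embedding(text):
    # Mean-pool the final-layer hidden states over all token positions:
    # a simple, assumed way to get one vector for a span of text.
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.last_hidden_state[0].mean(dim=0)

# Placeholder discourse-plus-sentence context and targets (not the stimuli):
ctx = pooled_embedding("The crowd roared as the match began. The striker lined up the kick.")
for target in ["goal", "referee", "pencil"]:
    sim = torch.cosine_similarity(ctx, pooled_embedding(target), dim=0)
    print(target, float(sim))

Because BERT's representations are context-sensitive, the same pooling applied to the sentence with and without the preceding discourse can probe whether event context reshapes the similarity structure.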
