Poster C62, Wednesday, August 21, 2019, 10:45 am – 12:30 pm, Restaurant Hall

Modelling incremental development of semantic prediction

Hun S. Choi 1, Barry J. Devereux 2, Billi Randall 1, William Marslen-Wilson 1, Lorraine K. Tyler 1; 1 Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, 2 School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast

During spoken language comprehension, prediction is one of the core incremental processing operations, guiding the interpretation of each upcoming word with respect to its preceding context. As words in a sentence accumulate over time, the constraints they generate become more specific and informative. The predictive processing framework provides an explicit hypothesis about incremental speech comprehension, which previous studies have tested and supported using quantifications of "entropy" (Hale, 2006; Frank, 2013) and "surprisal" (Hale, 2001; Levy, 2008). Within this broad context, we investigated the role of the semantic constraint elicited by each word in a sentence (its subject, verb and object) in generating the event representation that guides message-level interpretation, using models of constraint and integration derived from the latent Dirichlet allocation (LDA) approach to topic modelling (O'Seaghdha & Korhonen, 2014). This approach enabled us to incorporate prior knowledge about different topics in a way that maximizes model evidence during training. Specifically, we asked people to listen to spoken sentences of the form "The experienced walker chose the path" and tested the following hypotheses under the Bayesian belief-updating framework (Kuperberg & Jaeger, 2016):

a) The subject will semantically constrain its object as soon as the subject noun is recognised.

b) The verb's constraint on its direct object will be incorporated into the subject's constraint as soon as the verb is recognised.

c) The combined subject + verb constraint will be utilized to process the object as soon as it is integrated.

To test these hypotheses, we collected electroencephalography (EEG) and magnetoencephalography (MEG) data millisecond (ms) by millisecond while participants were listening to each of 200 natural sentences.
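The two information-theoretic measures named above have standard definitions: entropy quantifies how uncertain the predictive distribution over upcoming words is, and surprisal quantifies how unexpected the word that actually arrives is under that distribution. A minimal sketch of both, using a made-up predictive distribution over direct objects (not the authors' LDA-derived distributions), might look like this:

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a predictive distribution over candidate words."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def surprisal(dist, word):
    """Surprisal (bits) of an observed word under the predictive distribution."""
    return -math.log2(dist[word])

# Hypothetical predictive distribution over the direct object after hearing
# "The experienced walker chose the ..." (values are illustrative only).
prior = {"path": 0.5, "route": 0.3, "trail": 0.15, "car": 0.05}

print(round(entropy(prior), 3))           # → 1.648 bits of uncertainty
print(round(surprisal(prior, "path"), 3)) # → 1.0 bit: the expected word is unsurprising
```

As the context accumulates (subject, then subject + verb), the distribution sharpens, entropy falls, and the surprisal of a congruent object decreases, which is the incremental pattern the hypotheses above describe.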
To explore the spatial dynamics within the neural language network (Kocagoncu et al., 2017), we analyzed these data in source space using representational similarity analysis (RSA; Kriegeskorte et al., 2008). In this way, we preserved the rich multivariate response pattern of neural activity spanned by source vertices (space) and time within a small searchlight sphere, and correlated this response pattern with each of the patterns predicted by the models of constraint and integration. Our results confirmed all three hypotheses above:

a) The constraint (entropy) placed by the subject on its object was generated as early as the onset of the subject noun, lasting for about 300 ms in right anterior and superior temporal areas (RTP/RSTG), and soon re-emerged around verb onset until the verb was recognised in the right inferior frontal gyrus (RIFG). This was then replaced by the integrated (subject + verb) entropy effect on the object.

b) The integrated (subject + verb) constraint entropy effect was observed around the uniqueness point of the verb, replacing the subject-alone entropy effect in the right middle temporal gyrus (RMTG).

c) The surprisal effect of the integrated (subject + verb) constraint on the object was clearly reflected around the offset of the object noun, lasting for more than 200 ms in RSTG/RMTG.

Taken together, these results provide neurobiological evidence for the cyclical development of semantic constraint to complete the event representation in regions involved in semantic constraint and integration (Jung-Beeman, 2005).
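The core RSA computation described above can be sketched generically: build a representational dissimilarity matrix (RDM) from the multivariate response patterns in a searchlight, build a second RDM from the model predictions, and compare their upper triangles with a rank correlation. The sketch below uses random stand-in data and plain NumPy; the sizes, the 1 − Pearson distance, and the Spearman comparison are common RSA defaults, not details taken from the authors' pipeline:

```python
import numpy as np

def rdm(patterns):
    """RDM: 1 - Pearson correlation between the response patterns of each
    pair of conditions (rows = conditions, columns = features)."""
    return 1.0 - np.corrcoef(patterns)

def upper(mat):
    """Vectorise the upper triangle of a square matrix, excluding the diagonal."""
    i, j = np.triu_indices_from(mat, k=1)
    return mat[i, j]

def spearman(a, b):
    """Spearman correlation, computed as the Pearson correlation of ranks."""
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(0)
# Hypothetical data: 20 sentences x 50 features (source vertices x time points
# within one searchlight sphere), plus model patterns sharing some structure.
neural = rng.normal(size=(20, 50))
model = neural + rng.normal(scale=2.0, size=(20, 50))

fit = spearman(upper(rdm(neural)), upper(rdm(model)))
print(-1.0 <= fit <= 1.0)  # → True: a valid correlation-based model fit
```

Repeating this comparison at every searchlight location and time point, for each model RDM (subject constraint, subject + verb constraint, surprisal), yields the spatiotemporal effect maps summarised in the results.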

Themes: Computational Approaches, Meaning: Combinatorial Semantics
Method: Electrophysiology (MEG/EEG/ECOG)