Driving and suppressing the human language network using large language models

Poster B91 in Poster Session B, Tuesday, October 24, 3:30 - 5:15 pm CEST, Espace Vieux-Port

Greta Tuckute1, Aalok Sathe1, Shashank Srikant1,2, Maya Taliaferro1, Mingye Wang1, Martin Schrimpf3, Kendrick Kay4, Evelina Fedorenko1,5; 1Massachusetts Institute of Technology, 2MIT-IBM Watson AI Lab, 3Quest for Intelligence, 4University of Minnesota, 5Harvard University

Reading and understanding this sentence engages a set of left-lateralized frontal and temporal brain regions. These interconnected areas, the ‘language network’, support both comprehension and production (Menenti et al., 2011; Hu, Small et al., 2022) and are highly selective for language relative to diverse non-linguistic input (Fedorenko et al., 2011). However, the precise representations and computations underlying comprehension remain unknown. Enabled by progress in artificial intelligence, large language models (LLMs) have emerged as today’s most quantitatively accurate models of language processing in the brain (Schrimpf et al., 2021; Goldstein et al., 2022). Despite this accuracy, there has been no attempt to test whether LLMs can causally control neural responses to language. We asked whether LLMs are accurate enough to identify novel sentences that drive (or suppress) brain activity in new participants.

We first developed an encoding model to predict responses in the left-hemisphere (LH) language network to arbitrary new sentences. We acquired BOLD responses from 5 participants who each read a set of 1,000 diverse, corpus-extracted sentences. The model took as input representations from GPT2-XL (previously identified as the most brain-aligned language model; Schrimpf et al., 2021) and was trained, via ridge regression, to predict the average BOLD response of the LH language network. This model explained 68% of the variance when assessed on held-out sentences from the n=1,000 sentence set.

We then evaluated this model by (a) identifying, from a pool of ~1.8M sentences, novel sentences predicted to activate the language network to a maximal or minimal extent (250 ‘drive’ and 250 ‘suppress’ sentences), and (b) collecting brain responses to these sentences in 3 new participants, who read the 500 new sentences along with the original set of n=1,000 sentences. The drive sentences yielded significantly higher responses than the baseline sentences (mean BOLD increase of 86%, β=0.27, t=9.72, p<.0001), and the suppress sentences yielded significantly lower responses than the baseline sentences (mean BOLD decrease of 98%, β=-0.29, t=-10.44, p<.0001). Hence, these model-selected sentences can indeed drive and suppress the activity of human language areas in new individuals.

We then asked: what makes drive sentences elicit strong responses? Which stimulus properties is the language network most responsive to? We collected a diverse set of linguistic and semantic norms for the 1,500 sentences used in this study (from n=2,700 independent participants via web-based platforms). A systematic analysis of the model-selected sentences revealed that the surprisal and well-formedness of linguistic input are key determinants of response strength in the language network. Sentences in the mid-range of the well-formedness scale elicited stronger responses than sentences at the lower and higher ends of the scale, indicating that the language network responds most strongly to sentences that are ‘normal’ enough to engage it, yet complex enough to tax it.

In summary, these results establish the ability of model-selected sentences to noninvasively control neural activity in higher-level cortical areas, such as the language network. Furthermore, we demonstrate that brain responses to model-selected sentences can provide valuable, assumption-neutral insights into the computations underlying language processing.
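
To make the encoding-model step concrete, here is a minimal sketch of the analysis described above: GPT2-XL sentence representations mapped to the mean LH language-network BOLD response via cross-validated ridge regression. The layer choice, last-token pooling, and regularization grid are illustrative assumptions, not the authors’ exact pipeline, and `sentences` / `bold` stand in for the study’s data.

```python
# Sketch of the encoding model: GPT2-XL sentence representations -> ridge
# regression -> mean LH language-network BOLD response.
# Placeholders (not included here): 'sentences' is the list of 1,000 corpus
# sentences; 'bold' is the (1000,) array of per-sentence network responses.
import numpy as np
import torch
from transformers import GPT2Model, GPT2Tokenizer
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2Model.from_pretrained("gpt2-xl", output_hidden_states=True).eval()

def sentence_embedding(sentence: str, layer: int = 24) -> np.ndarray:
    """Hidden state of the sentence's final token at one layer
    (layer and pooling are assumptions, not the study's exact choice)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[layer]  # (1, n_tokens, 1600)
    return hidden[0, -1].numpy()

X = np.stack([sentence_embedding(s) for s in sentences])  # (1000, 1600)
ridge = RidgeCV(alphas=np.logspace(-1, 6, 8))             # assumed alpha grid
pred = cross_val_predict(ridge, X, bold, cv=5)            # held-out predictions
r = np.corrcoef(pred, bold)[0, 1]
print(f"held-out variance explained: {r ** 2:.2f}")       # abstract reports 68%
```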
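
Given such a trained model, the drive/suppress selection step amounts to scoring novel candidate sentences and taking the extremes of the predicted-response distribution. A sketch, continuing from the code above; `candidates` stands in for the study’s ~1.8M-sentence pool:

```python
# Score novel candidate sentences with the trained encoding model and keep
# the extremes as drive / suppress stimuli (250 each, as in the abstract).
ridge.fit(X, bold)  # refit on the full n=1,000 training set

scores = ridge.predict(np.stack([sentence_embedding(s) for s in candidates]))
order = np.argsort(scores)
suppress_sentences = [candidates[i] for i in order[:250]]  # lowest predicted
drive_sentences = [candidates[i] for i in order[-250:]]    # highest predicted
```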
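
The reported condition effects (β=0.27 for drive, β=-0.29 for suppress) correspond to regressing BOLD responses on condition with baseline as the reference level. A sketch of that contrast, assuming a simple OLS specification (the abstract does not state the exact model, e.g., whether participant-level random effects were included); `bold_new` and `condition_labels` are placeholders:

```python
# Contrast drive / suppress conditions against baseline sentences.
# Placeholders: 'bold_new' holds per-sentence responses from the 3 new
# participants; 'condition_labels' holds 'baseline' / 'drive' / 'suppress'.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({"bold": bold_new, "condition": condition_labels})
fit = smf.ols(
    "bold ~ C(condition, Treatment(reference='baseline'))", data=df
).fit()
print(fit.summary())  # expect a positive drive beta, a negative suppress beta
```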
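
Finally, the mid-range peak in the well-formedness analysis describes an inverted-U pattern, which can be probed by adding a quadratic term to a regression of responses on the well-formedness norm. A sketch under that assumption; `wellformedness` and `bold_all` are placeholders, and the rating scale is not specified in the abstract:

```python
# Probe for an inverted-U between rated well-formedness and BOLD response:
# a significant negative quadratic coefficient is consistent with the
# mid-range peak reported in the abstract.
import numpy as np
import statsmodels.api as sm

wf = np.asarray(wellformedness)  # per-sentence well-formedness norm
X_quad = sm.add_constant(np.column_stack([wf, wf ** 2]))
fit = sm.OLS(np.asarray(bold_all), X_quad).fit()
print(fit.params)   # [intercept, linear, quadratic]
print(fit.pvalues)
```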

Topic Areas: Computational Approaches, Reading
