Poster B69, Tuesday, August 20, 2019, 3:15 – 5:00 pm, Restaurant Hall

Neurobiological modeling of the Mental Lexicon

Hartmut Fitz1,2, Dick van den Broek2;1Donders Centre for Cognitive Neuroimaging, Radboud University, 2Neurobiology of Language Department, Max Planck Institute for Psycholinguistics

Language processing requires a long-term memory in which linguistic units are stored and maintained (the Mental Lexicon). Theories of the Mental Lexicon differ on what these units are, how much internal structure they have, and how they are retrieved. Here, we investigate the neurobiological basis of the Mental Lexicon. That is, we ask how engrams emerge in neurobiological infrastructure, how they remain stable despite ongoing plasticity, and how they can be reactivated from partial retrieval cues. We addressed these questions by simulating sparsely connected recurrent networks of 5,000 spiking neurons with conductance-based synapses (20% connectivity), similar to Litwin-Kumar & Doiron (2014). Neurons were either excitatory (E, adaptive exponential) or inhibitory (I, leaky integrate-and-fire). Networks were driven by sequences of 15,000 words corresponding to English sentences, followed by a 12-minute idle period without language input. Noisy, random background activity was present in both phases. Words were represented as Poisson spike trains (8 kHz) that targeted subpopulations of excitatory neurons. Each word consisted of a lexeme and phonological, syntactic, and semantic features (93 word features in total), and was presented for a duration proportional to its phonemic length (e.g., cat: k|æ|t = 300 ms). Inhibitory spike timing-dependent plasticity (iSTDP) on I->E synapses balanced the network and produced asynchronous, irregular firing (Vogels et al., 2011). Voltage-based spike timing-dependent plasticity (STDP) on E->E synapses modeled learning in terms of long-term potentiation and depression (Clopath et al., 2010). To counteract instability due to Hebbian plasticity, synaptic normalization was applied to E->E synapses (Turrigiano et al., 2008). All plasticity mechanisms were active throughout the simulations.
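The word-encoding scheme above (Poisson spike trains at 8 kHz, stimulus duration proportional to phonemic length) can be sketched as follows. This is a minimal illustration, not the authors' simulation code: the function names, the 100 ms-per-phoneme assumption (implied by cat = 3 phonemes = 300 ms), and the NumPy-based spike generation are ours.

```python
import numpy as np

def poisson_spike_train(rate_hz, duration_s, rng):
    """Spike times of a homogeneous Poisson process on [0, duration_s)."""
    # Draw more inter-spike intervals than expected, then truncate to the window.
    n_draw = int(rate_hz * duration_s * 1.5) + 20
    isi = rng.exponential(1.0 / rate_hz, size=n_draw)
    times = np.cumsum(isi)
    return times[times < duration_s]

def encode_word(phonemes, rate_hz=8000.0, ms_per_phoneme=100.0, seed=0):
    """Encode a word as Poisson input whose duration scales with
    phonemic length (e.g. 'cat' = k|ae|t -> 3 x 100 ms = 300 ms)."""
    rng = np.random.default_rng(seed)
    duration_s = len(phonemes) * ms_per_phoneme / 1000.0
    return poisson_spike_train(rate_hz, duration_s, rng)

spikes = encode_word(["k", "ae", "t"])  # ~300 ms of 8 kHz Poisson input
```

In the model, such a train would be delivered to the excitatory subpopulations coding the word's lexeme and features; here it is just returned as an array of spike times.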
During learning, word features were rapidly encoded into strongly connected cell assemblies while connectivity between assemblies was depressed. These engrams emerged from naive, random networks through STDP. The functional role of iSTDP was to prevent runaway processes and winner-take-all synaptic dynamics. These findings confirm previous results with similar networks that used a different stimulation protocol and inputs without internal structure (Litwin-Kumar & Doiron, 2014). To test word retrieval, networks received phonological input and had to activate the corresponding lexeme and its syntactic and semantic features from these partial cues. Lexemes were activated with high accuracy (84%), but syntactic and semantic features remained below 50%. Although fine-grained features were encoded into the neurobiological substrate, they were difficult to retrieve from phonological cues alone. During the idle period with unspecific background activity, engrams spontaneously reactivated. Strong inhibitory synapses depotentiated while excitatory synapses remained stable. This homeostatic balancing consolidated memories and facilitated retrieval. We also found that engram formation required a pool of excitatory neurons that were not stimulated by language input. This motivates a layered network structure in which extrinsic input targets some layers but not others. This work provides a first step towards explaining how the Mental Lexicon could be implemented at the neurobiological level. Storage and maintenance were robust, but the retrieval of features over time lacked precision. Future work needs to address how contextual information from sentence-level processing can further constrain lexical selection.
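The retrieval setting (completing a stored word pattern from a phonological cue alone) can be illustrated with a deliberately simplified, non-spiking analogue: Hopfield-style pattern completion in a binary network. This is not the authors' spiking model; the Hebbian storage rule, the sign-update dynamics, and the 60/140 split between "phonological" and remaining feature units are illustrative assumptions.

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product storage of binary patterns (one per row)."""
    bipolar = patterns * 2.0 - 1.0          # map {0,1} -> {-1,+1}
    n = patterns.shape[1]
    W = bipolar.T @ bipolar / n             # symmetric weight matrix
    np.fill_diagonal(W, 0.0)                # no self-connections
    return W

def retrieve(W, cue, steps=10):
    """Iterate threshold dynamics from a binary cue (0 = unknown/inactive)."""
    s = cue * 2.0 - 1.0
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return (s > 0).astype(int)

rng = np.random.default_rng(0)
n, n_words, n_phon = 200, 3, 60             # 60 "phonological" units, 140 other features
patterns = rng.integers(0, 2, size=(n_words, n))
W = store(patterns)

cue = np.zeros(n, dtype=int)
cue[:n_phon] = patterns[0, :n_phon]         # partial cue: phonological part only
completed = retrieve(W, cue)                # recurrent dynamics fill in the rest
```

In such toy networks a partial cue typically recovers the full stored pattern; the abstract's point is that in the biologically detailed spiking model this completion was much harder for syntactic and semantic features than for lexemes.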

Themes: Computational Approaches, Speech Perception
Method: Computational Modeling
