Slide Sessions

Slide Session C

Thursday, October 7, 2021, 5:00 - 6:30 pm PDT

Neural pattern changes underlie successful sound-to-category mapping

Zhenzhong Gan1,2, Suiping Wang2, Patrick C.M. Wong1, Bharath Chandrasekaran3, Gangyi Feng1; 1The Chinese University of Hong Kong, Hong Kong SAR, China, 2South China Normal University, Guangzhou, China, 3University of Pittsburgh, Pittsburgh, USA

Learning to map multi-dimensional, continuous acoustic signals to discrete categories is fundamental to auditory perception. Successful sound-to-category mapping requires the brain to efficiently reorganize during learning to encode novel categories and guide categorization decisions. Previous studies (e.g., Feng et al., 2019, 2021) have demonstrated that inferior frontal and superior temporal regions within the core auditory system play important roles in representing newly acquired speech and auditory categories. However, it is not yet clear how the auditory neural system changes in real time in association with successful sound-to-category mapping during online learning. Here we conducted a functional magnetic resonance imaging (fMRI) experiment with a feedback-based sound-to-category training paradigm. We leveraged multivoxel pattern analysis to examine the extent to which changes in sound-related neural activation patterns during learning are associated with ultimate learning success. This design and analysis enable us to reveal the dynamic neural changes during the learning process (as compared to pre- and post-training designs). Two groups of participants (N = 60) learned to categorize 40 ripple sounds into four categories with differing category structures, receiving trial-by-trial corrective feedback across six training blocks (240 training trials in total). One group of participants learned rule-based (RB) categories, hypothesized to involve an explicit sound-to-rule mapping network, while the other group learned information-integration (II) categories, hypothesized to involve a procedural sound-to-reward mapping network. We estimated the activation pattern of each sound in each block and calculated the neural pattern dissimilarities (NPDs) between each pair of blocks and sounds to represent changes in neural patterns from block to block. These NPDs were then correlated with a dissimilarity matrix derived from the learning outcomes (i.e., post-training accuracies) for the sounds. This outcome dissimilarity matrix reflects variability in how successfully each sound was mapped to its category. Significant correlations between NPDs and the learning-outcome matrix would indicate a close relationship between neural pattern changes and sound-to-category learning success during learning. We employed a searchlight approach to identify brain regions showing such a relationship. The searchlight-based neural-behavioral correlation analysis revealed positive correlations between neural pattern changes and learning outcomes for both II and RB learners in a distributed frontoparietal network, including the left middle frontal gyrus (MFG), left inferior parietal lobe (IPL), bilateral inferior frontal gyrus (IFG), bilateral precuneus, and bilateral medial prefrontal cortex (mPFC), suggesting that greater neural pattern changes in this frontoparietal network during learning are associated with more successful mapping. We did not identify any regions showing significant negative correlations, nor any regions showing significant differences in correlation between II and RB learners, which suggests that neural pattern change in the frontoparietal network may be a category-general neural mechanism underlying sound-to-category learning success. These findings demonstrate a novel neural plasticity mechanism during the online learning process, whereby updates or changes in neural activation patterns in the frontoparietal network support online sound-to-category mapping.
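As a schematic illustration of the neural-behavioral correlation step described above, the sketch below (Python, not the authors' pipeline) computes per-sound neural pattern dissimilarities across training blocks and rank-correlates a neural dissimilarity structure with an outcome dissimilarity matrix built from hypothetical post-training accuracies; all array names, shapes, and the exact dissimilarity definitions are assumptions for illustration.

```python
# Illustrative sketch only (not the authors' pipeline): correlating neural pattern
# change across training blocks with a dissimilarity structure derived from
# post-training accuracies. Names, shapes, and dissimilarity definitions are assumed.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_sounds, n_blocks, n_voxels = 40, 6, 150                  # hypothetical dimensions
betas = np.random.randn(n_sounds, n_blocks, n_voxels)      # pattern of each sound in each block
post_acc = np.random.rand(n_sounds)                        # post-training accuracy per sound

# Neural pattern dissimilarity (NPD): 1 - Pearson r between a sound's patterns in
# every pair of blocks, averaged to index how much its representation changed.
npd = np.array([pdist(betas[s], metric="correlation").mean() for s in range(n_sounds)])

# Sound-by-sound dissimilarity structures: one from neural change, one from outcome.
neural_rdm = pdist(npd[:, None])        # |NPD_i - NPD_j|
outcome_rdm = pdist(post_acc[:, None])  # |accuracy_i - accuracy_j|

# Rank correlation between the two dissimilarity structures; in the study this
# correlation was computed within searchlight spheres across the whole brain.
rho, p = spearmanr(neural_rdm, outcome_rdm)
print(f"neural-behavioral correlation: rho = {rho:.3f}, p = {p:.3f}")
```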

Timing embodied semantics, across and within the brain: A combined EEG/iEEG study on face-related nouns

Adolfo García1,2,3, Eugenia Hesse1,3, Agustina Birba1,3, Federico Adolfi3, Ezequiel Mikulan4, Miguel Martorell Caro3, Agustín Petroni5, Tristán Bekinschtein6, María del Carmen García7, Walter Silvia7, Carlos Ciraolo7, Esteban Vaucheret7, Lucas Sedeño3, Agustín Ibáñez1,2,3,8; 1Universidad de San Andrés, Buenos Aires, Argentina, 2Global Brain Health Institute, University of California, San Francisco, and Trinity College Dublin, 3National Scientific and Technical Research Council, Argentina, 4University of Milan, 5Universidad de Buenos Aires, Argentina, 6University of Cambridge, 7Hospital Italiano de Buenos Aires, Argentina, 8BrainLat Institute, Santiago, Chile

During semantic processing, the brain recruits multimodal conceptual systems and embodied mechanisms grounding modality-specific information. Yet, no consensus exists on how crucial the latter are for the inception of semantic distinctions, mainly because most research, focused on action-related words, has been undermined by varied motor artifacts. Here we combined EEG and iEEG to examine when nouns denoting facial body parts (FBPs) and non-facial body parts (nFBPs) are discriminated and individually classified in face-processing and multimodal networks. In two experiments, participants completed a semantic decision task involving 21 FBP nouns (e.g., nose) and 21 nFBP nouns (e.g., chest), matched for nine psycholinguistic variables and presented amid diverse filler items. The EEG experiment involved 25 healthy young participants. Signals were recorded online with a 128-channel system. ERP analysis of face-sensitive N170 modulations focused on two temporo-occipital four-electrode regions of interest, via Monte Carlo permutation tests (1000 permutations) combined with bootstrapping (p < .05, FDR-corrected). The iEEG experiment comprised two young patients with intractable epilepsy undergoing intracranial monitoring. Both had electrodes implanted in key hubs of the face-processing network (right fusiform, ventral/rostral lingual, and calcarine gyri) and a multimodal semantic network (angular and supramarginal gyri). Time-frequency charts were obtained to identify differential modulations between conditions in each network (across patients) for the 1-20 Hz frequency range, which is sensitive to both semantic and facial processing. Digitized signals were analyzed using a windowed Fourier transform. Significant power changes were analyzed across time against baseline values and between conditions with non-parametric bootstrap tests with 2000 permutations (p < .05, FDR-corrected). Multivariate pattern analyses, via support vector machines, were also used to examine the classification efficiency of signals from both patients related to FBP and nFBP words between 1 and 20 Hz, for each network separately. Finally, EEG task-related connectivity was examined in an early (0-200 ms) and a late (200-400 ms) window, considering all electrodes across the scalp, via a non-linear method called weighted Symbolic Mutual Information (wSMI). The same metric was used to calculate iEEG connectivity in the same windows for each pair of electrodes within both networks, for each patient separately. Results revealed four main patterns. First, relative to nFBP words, nouns denoting FBPs increased N170 amplitude (a key signature of early facial processing) over the right hemisphere. Second, iEEG-derived time-frequency patterns showed that FBP words triggered fast (~100 ms) activity boosts within the face-processing network, mirrored by later (~275 ms) effects in multimodal circuits. Third, iEEG recordings from the face-processing network allowed ~80% of items to be decoded within the first 200 ms, while classification based on multimodal-network activity only surpassed ~70% after 250 ms. Finally, EEG and iEEG connectivity between both networks proved greater in the early than in the late window. Collectively, these findings indicate that semantic differentiations can emerge via fast sensorimotor reenactments and rapid interplay with cross-modal conceptual systems. Accordingly, they challenge views that reject an inceptive role for embodied mechanisms in semantic processing, as well as those that reduce semantic processing exclusively to embodied reactivations.
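The multivariate decoding step described above can be sketched roughly as follows (an assumed scikit-learn pipeline, not the authors' code); trial counts, channel counts, and the feature layout are placeholders.

```python
# Illustrative sketch (assumptions, not the authors' analysis) of the multivariate
# pattern analysis: classifying FBP vs. nFBP trials from 1-20 Hz time-frequency
# power with a linear support vector machine and cross-validation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

n_trials, n_channels, n_freqs, n_times = 84, 8, 20, 50            # hypothetical dimensions
power = np.random.randn(n_trials, n_channels, n_freqs, n_times)   # 1-20 Hz power per trial
labels = np.repeat([0, 1], n_trials // 2)                          # 0 = FBP noun, 1 = nFBP noun

X = power.reshape(n_trials, -1)                                    # flatten features per trial
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))

# Stratified cross-validation estimates decoding accuracy; in the study this was
# run separately for the face-processing and multimodal networks and across time.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, labels, cv=cv)
print(f"decoding accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```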

Divide & Concur: A Predictive Coding Account of the N400 Event-Related Potential Component

Samer Nour Eddine1, Trevor Brothers1,2, Lin Wang1,2, Michael Spratling3, Gina Kuperberg1,2; 1Tufts University, 2Massachusetts General Hospital, 3King's College London

Predictive coding is a prominent theory of cortical function that is increasingly invoked to explain aspects of language processing. According to this theory, the brain infers high-level structure from low-level sensory data through iterative cycles of top-down prediction, bottom-up prediction error, and incremental belief updating. In language comprehension, the N400 response has often been linked to “prediction error”, but there has been no prior attempt to explicitly model this ERP component within a predictive coding framework. Here, we developed a computational model of word comprehension based on predictive coding principles. Our goal was to determine if the activation dynamics of this model could accurately reproduce the time-course of the N400 response and its sensitivity to a variety of lexical and contextual factors. Based on a modified interactive-activation architecture (Spratling, 2016), our model includes three hierarchical levels of linguistic representation: orthographic, lexical, and semantic, with distinct state units and error units at each level. On each iteration, state units at a given level predict states at the level below. Any mismatch between these predictions and the true state generates a prediction error (PE), which is passed up to “correct” the state that generated this incorrect prediction. Over time, errors are minimized as the model settles on a correct lexical and semantic state that can accurately explain the bottom-up orthographic input. We operationalized the N400 as the summed lexical and semantic PE produced by the model, averaged in a 10-iteration window around the error’s peak. Lexical Simulations: We selected 512 four-letter words that orthogonally varied in frequency, concreteness, and orthographic neighborhood size (ON). Concreteness was implemented as the number of semantic features associated with each word (9 vs. 18). ON was measured as the mean Levenshtein distance between each word and the 20 nearest neighbors in the model lexicon, and frequency was implemented by biasing the model’s feedback weights based on each word’s corpus frequency. Results: As expected, model PEs reproduced the characteristic rise-and-fall of the N400 response. Consistent with human ERP data, the lexico-semantic PE was enhanced for words with additional semantic features, words with more orthographic competitors, and words with lower frequencies. Contextual Simulations: To simulate the effects of prior linguistic context, we presented word pairs that were either repetitions (LIME–LIME), semantic associates (SOUR–LIME), or unrelated (BANK–LIME). To simulate contextual predictability, the higher-level state units associated with each word were clamped to activations proportional to their predictability (i.e., cloze) before presenting any bottom-up input. Results: Similar to human readers, lexico-semantic PEs were attenuated by both word repetition and semantic priming. We also observed a graded reduction in PEs as a function of word predictability. Finally, this model also reproduced interactions between lexical and contextual factors, with smaller effects of frequency and concreteness for predictable words. Discussion: Together, these findings suggest that predictive coding can provide a parsimonious and biologically motivated account of evoked neural activity during word comprehension. Moreover, this approach can potentially simulate behavioral responses, such as lexical decision times, and make novel empirical predictions for neuroimaging studies.
Reference: Spratling, M. W. (2016). Cognitive Processing.
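For readers unfamiliar with the state-unit/error-unit dynamics described above, the toy sketch below illustrates the general idea in a simplified subtractive form; it is not the authors' model (which builds on Spratling's architecture), and all layer sizes, weights, and learning rates are arbitrary assumptions.

```python
# Toy sketch of generic predictive-coding dynamics: state units predict the level
# below, error units pass the mismatch back up, and summed PE is tracked over
# iterations. Simplified subtractive variant for illustration only; sizes and
# rates are assumptions, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
n_ortho, n_lex = 20, 8                         # hypothetical layer sizes
W = rng.random((n_ortho, n_lex))               # generative weights: lexical -> orthographic
W /= W.sum(axis=0, keepdims=True)

target_word = 3
ortho_input = W[:, target_word] + 0.05 * rng.random(n_ortho)  # noisy bottom-up input

lex_state = np.full(n_lex, 1.0 / n_lex)        # initial lexical beliefs
lr = 0.5
pe_per_iteration = []

for t in range(40):
    prediction = W @ lex_state                 # top-down prediction of the input
    error = ortho_input - prediction           # prediction error (bottom-up signal)
    lex_state += lr * (W.T @ error)            # belief updating driven by the error
    lex_state = np.clip(lex_state, 0.0, None)  # keep activations non-negative
    pe_per_iteration.append(np.abs(error).sum())

# Summed PE shrinks as the model settles on a state that explains the input;
# in the full three-level model, the lexico-semantic PE time course is the
# quantity the abstract links to the N400.
print([round(p, 3) for p in pe_per_iteration[:10]])
```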

The Effect of Information Structure on Word Order Processing: An fMRI Study

Hyeonjeong Jeong1, Jungho Kim2, Cui Haining1, Sachiko Kiyama1, Masataka Yano3, Masatoshi Koizumi1; 1Tohoku University, Sendai, Japan, 2Kyoto Women’s University, Japan, 3Tokyo Metropolitan University, Japan

Many languages grammatically allow both canonical and non-canonical word orders. Previous neurolinguistic studies have reported greater involvement of the left inferior frontal gyrus (IFG) in the processing of non-canonical word orders (e.g., object-subject-verb [OSV] in Japanese) than of their canonical counterparts (e.g., SOV in Japanese), because of the increased demand for syntactic computation (Kinno et al., 2008). Some behavioral and ERP studies have found that discourse factors, such as information structure, partially attenuate the processing demand of non-canonical word orders (Kaiser & Trueswell, 2004; Yano & Koizumi, 2018). To clarify the precise neural mechanism underlying syntactic computation of non-canonical sentences and the relevant contextual effects, we conducted an fMRI study on sentence processing by manipulating word order and information structure. In particular, we intended to identify neural networks sensitive to (i) syntactic structure, (ii) information structure, and (iii) their interaction. The participants were 37 healthy, right-handed, native Japanese speakers (mean age: 20.8±1.7, 16 females). A total of 480 pairs of Japanese sentences were adopted from our previous ERP studies. Each pair consisted of a prior sentence and a target sentence. Half of the target sentences employed a canonical word order (SOV), and the rest used a non-canonical word order (OSV). There were two types of target sentences: one in which the initial noun phrase (NP) refers to discourse-old information (i.e., the entity mentioned in the prior sentence) and the second NP refers to a discourse-new entity (the given-new order), and the other in which the new information comes first (the new-given order). By manipulating the word order (SO vs. OS) and information structure (given-new vs. new-given) of the target sentences, four conditions were created (SgivenOnew, SnewOgiven, OgivenSnew, OnewSgiven). During fMRI scanning, the prior sentence was presented in its entirety, and the target sentence was presented phrase by phrase. To check whether participants understood the sentences, comprehension questions were asked at the end of a random half of the trials. We tested the main effect of word order (OS vs. SO), the main effect of information structure (new-given vs. given-new), and the interaction between these two factors. Statistical analyses were performed with SPM12 using a random-effects model (corrected to p < 0.05 by cluster size). Three major findings emerged. First, analysis of the main effect of word order ([OnewSgiven + OgivenSnew] > [SnewOgiven + SgivenOnew]) revealed significantly greater activation in the left opercular part of the IFG (BA44), left premotor areas (BA6), and the left posterior part of the superior temporal gyrus. Second, irrespective of word order, the new-given conditions [OnewSgiven + SnewOgiven] elicited greater activation than the given-new conditions [OgivenSnew + SgivenOnew] in the bilateral inferior parietal lobules and the left triangular part of the IFG (BA45). Third, although no significant interaction was observed, the OnewSgiven condition elicited greater activation in the left opercular part of the IFG (BA44) compared to the other three conditions. Taken together, these findings indicate that the left opercular part of the IFG (BA44) plays a vital role in integrating the demands of syntactic computation and information-structural compensation (cf. Van Leeuwen et al., 2014).
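For reference, the contrast weights implied by the 2 × 2 factorial design above can be written out as follows (an illustrative sketch with an assumed condition ordering, not the actual SPM12 design matrix or batch used in the study).

```python
# Illustrative contrast weights for the 2 x 2 design (word order x information
# structure). Condition ordering is an assumption made for this sketch.
import numpy as np

conditions = ["OnewSgiven", "OgivenSnew", "SnewOgiven", "SgivenOnew"]

word_order_main     = np.array([ 1,  1, -1, -1])   # OS > SO
info_structure_main = np.array([ 1, -1,  1, -1])   # new-given > given-new
interaction         = np.array([ 1, -1, -1,  1])   # word order x information structure

for name, c in [("word order (OS > SO)", word_order_main),
                ("information structure (new-given > given-new)", info_structure_main),
                ("interaction", interaction)]:
    print(f"{name}: {dict(zip(conditions, c.tolist()))}")
```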
