Slide Slam E12
A multi-perspective study of the neural representation of lexical semantics
Yang Yang¹; ¹East China Normal University
Introduction & Objective: It is well established that semantic processing involves widely distributed brain regions (Patterson et al., 2007; Hinton, 1989; Plaut & McClelland, 2010), but how semantic concepts are organized and represented remains an open question. Studies of the neural basis of semantic categories have often relied on hand-picked or corpus-derived semantic features of low interpretability, which may omit important psychosemantic features and lead to biased results (Wang et al., 2018). Recently, report-based large-scale semantic network models, built through word association paradigms covering enough words (nodes) and associations (edges), have been proposed to capture real psychosemantic features (De Deyne et al., 2019; Jorge-Botana et al., 2018). Such networks may provide a more appropriate model for investigating the neural basis of semantic concept organization, but no Chinese word-association semantic model of sufficient scale has been established.

Methods: We therefore first established SWOW-ZH, a large-scale Chinese word-association semantic network model. Next, using representational similarity analysis (Kriegeskorte et al., 2008), we compared the neural representations of word concepts, measured with functional magnetic resonance imaging, against the semantic distances of these words in SWOW-ZH and in two word-embedding models, Word2Vec and ConceptNet. Stimulus words and semantic categories were determined in a data-driven way: a semantic community detection algorithm was applied to each of the three semantic models, yielding 72 two-character words from 9 communities shared by all models. Representational dissimilarity matrices (RDMs) of the stimuli were constructed at three levels of granularity: the community level (the coarsest), the cluster level, and the node level (the most fine-grained).
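The core RSA comparison described above can be sketched as follows. This is a minimal, illustrative sketch using synthetic data: the variable names, voxel count, and distance metrics are assumptions for demonstration, not the study's actual pipeline; only the 72-word stimulus count comes from the abstract.

```python
# Illustrative RSA sketch (Kriegeskorte et al., 2008): correlate a neural RDM
# built from voxel activity patterns with a model RDM of semantic distances
# (e.g., from SWOW-ZH). All data here are random stand-ins.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words, n_voxels = 72, 200                       # 72 stimulus words; voxel count assumed
patterns = rng.standard_normal((n_words, n_voxels))        # stand-in for fMRI patterns
model_dists = rng.random(n_words * (n_words - 1) // 2)     # stand-in for SWOW-ZH distances

# Neural RDM: pairwise correlation distance (1 - Pearson r) between word patterns,
# returned as the condensed upper-triangle vector.
neural_rdm = pdist(patterns, metric="correlation")

# Second-order comparison: Spearman rank correlation between the two RDM vectors.
rho, p = spearmanr(neural_rdm, model_dists)
print(f"model-brain similarity (Spearman rho): {rho:.3f}")
```

In practice this comparison would be repeated per region of interest (or searchlight) and per model/granularity level, with the model RDM derived from network or embedding distances rather than random values.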
Regions of interest were analyzed independently for representational similarity with the semantic models at the three granularity levels.

Results & Discussion: Neural similarity patterns of word concepts were most consistent with SWOW-ZH at all three granularity levels at the whole-brain level. Specifically, at the community level, the regions most similar to SWOW-ZH spanned a wide range of the anterior temporal lobe, temporoparietal cortex, and lateral occipital areas, whereas at the node level, significant similarities were mostly localized to the anterior temporal lobe. These results suggest, first, that representations of word semantics in the word-association network SWOW-ZH are more similar to the neural representation of semantics than those of the word-embedding models, indicating that it encodes a wider range of mental semantic features. Second, neural semantic representations are hierarchically structured along a gradient from temporo-occipital cortices to the anterior temporal lobe. Third, the anterior temporal lobe encodes semantic information at different granularities, supporting the hub-and-spoke model's proposal that the anterior temporal area is the hub that integrates semantic features from multiple sources and encodes semantic similarities (Patterson & Lambon Ralph, 2016; Patterson et al., 2007).

Conclusion: Overall, these findings advance our understanding of the neural representation of semantic knowledge and highlight that semantic networks constructed from large-scale behavioral data can make important contributions to the study of semantics across fields.