
Poster C52, Wednesday, August 21, 2019, 10:45 am – 12:30 pm, Restaurant Hall

Similarity of cortical semantic representations during language production and comprehension

Hiroto Yamaguchi1,2, Tomoya Nakai1,2, Shinji Nishimoto1,2; 1CiNet (NICT), 2Osaka University

[Introduction] We use language to send and receive messages that convey semantic meaning. Using encoding-model analyses, previous studies revealed the semantic representation of language in the brain while subjects listened to radio stories (Huth et al., 2016; de Heer et al., 2017). However, it remained unclear whether these representations are also recruited in other conditions, including language production. To address this issue, we conducted functional MRI (fMRI) experiments under language comprehension (reading and listening) and language production (speaking and thinking) conditions. We performed encoding-model analyses to estimate the semantic representation in the brain under each condition and compared the modeled representations across conditions.

[Methods] We recorded whole-brain activity using fMRI (Siemens MAGNETOM Prisma) in two experiments. In the language comprehension experiment, Japanese monologues were presented to five Japanese participants (ages 22-29; all right-handed) under two conditions: under the reading condition, participants read transcribed narratives; under the listening condition, they listened to spoken narratives. We presented a total of three hours of narratives for each condition. In the language production experiment, a random word or picture was presented on each trial. Using the presented content as a hint, participants spontaneously constructed a sentence of up to 4 seconds in length and articulated it within 4 seconds after a cue (speaking condition). After articulation, they subvocalized the same sentence without making any movement (thinking condition). Each participant performed 900 sentence-production trials. In this abstract, we report results only for the speaking condition. To estimate the semantic representations, we first transformed the presented or produced sentences into semantic vectors using the Wikipedia2Vec model (Yamada et al., 2018). Second, we modeled the brain activity in each voxel as a weighted linear sum of the semantic vectors using L2-regularized linear regression. The regressions were performed independently for the reading, listening, and speaking conditions. To validate the estimated representations, we calculated each model's prediction accuracy of brain activity on a held-out test dataset for the corresponding condition. For the significantly predicted voxels, we evaluated the similarity of the acquired semantic representations between conditions by calculating the correlation coefficient between the estimated weights.

[Results] The trained model for each condition provided significantly accurate predictions in broad cortical areas, including frontal, temporal, and parietal regions. Comparing the weights between the reading and listening conditions, we found significantly similar semantic representations in frontal, temporal, and parietal regions. Between the comprehension and production conditions, more restricted regions, such as the inferior frontal sulcus, superior frontal sulcus, superior temporal sulcus, and intraparietal sulcus, showed significantly high similarity. These areas were a subset of the areas that were similar between reading and listening.
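To make the voxelwise encoding procedure concrete, the following minimal Python sketch fits an L2-regularized (ridge) regression from semantic features to voxel responses and scores it on held-out data. All array shapes, the regularization strength, and the random placeholder data are illustrative assumptions rather than values from the study; a real analysis would use Wikipedia2Vec vectors aligned to the fMRI time series and a cross-validated regularization parameter.

import numpy as np
from sklearn.linear_model import Ridge

# Placeholder data standing in for the real stimuli and responses
# (shapes and values are illustrative assumptions):
# X rows: the semantic (e.g., Wikipedia2Vec) vector of the words presented
#   or produced around each fMRI time point;
# Y rows: the preprocessed BOLD response of every voxel at that time point.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((1000, 300))  # (time points, semantic dims)
Y_train = rng.standard_normal((1000, 500))  # (time points, voxels)
X_test = rng.standard_normal((200, 300))
Y_test = rng.standard_normal((200, 500))

# Model each voxel's activity as a weighted linear sum of the semantic
# vector with an L2 penalty; scikit-learn fits all voxels jointly when
# the target has multiple columns.
ridge = Ridge(alpha=10.0, fit_intercept=False)  # alpha is an assumed value
ridge.fit(X_train, Y_train)
weights = ridge.coef_  # (voxels, semantic dims): estimated representation

# Validate on the held-out test set: Pearson correlation between the
# predicted and measured response of each voxel.
Y_pred = ridge.predict(X_test)

def columnwise_corr(a, b):
    az = (a - a.mean(axis=0)) / a.std(axis=0)
    bz = (b - b.mean(axis=0)) / b.std(axis=0)
    return (az * bz).mean(axis=0)

prediction_accuracy = columnwise_corr(Y_pred, Y_test)  # one r per voxel

Fitting one regression per condition, as in the abstract, simply means repeating this with the stimulus and response matrices of the reading, listening, or speaking runs.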
[Conclusion] Our results showed that broad brain regions represent the meaning of words during story comprehension in a modality-invariant way. A subset of those regions had similar semantic representations during sentence production, even though the semantic content and the degree of linguistic complexity differed between the production and comprehension conditions. The current study thus revealed shared semantic representations for language comprehension and production.
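The between-condition comparison can be sketched the same way: for each voxel, correlate its two estimated weight vectors across the semantic dimensions. This is again a minimal illustration under stated assumptions; the weight matrices below are random placeholders standing in for per-condition ridge weights, and the abstract does not specify the significance test, so the permutation idea in the final comment is only one plausible choice.

import numpy as np

# Placeholder weight matrices (voxels x semantic dims), standing in for
# the ridge weights estimated independently for two conditions.
rng = np.random.default_rng(1)
W_reading = rng.standard_normal((500, 300))
W_listening = rng.standard_normal((500, 300))

def weight_similarity(w_a, w_b):
    # Pearson correlation between each voxel's two weight vectors,
    # computed across the semantic dimensions.
    a = w_a - w_a.mean(axis=1, keepdims=True)
    b = w_b - w_b.mean(axis=1, keepdims=True)
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return num / den  # one correlation coefficient per voxel

similarity = weight_similarity(W_reading, W_listening)
# Per-voxel significance could then be assessed, e.g., by permuting the
# semantic dimensions of one weight matrix to build a null distribution.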

Themes: Language Production, Meaning: Lexical Semantics
Method: Functional Imaging
