Poster B33, Thursday, August 16, 3:05 – 4:50 pm, Room 2000AB
The neural basis of concrete noun and verb meanings in congenitally blind individuals: An MVPA fMRI study
Giulia Elli 1, Rashi Pant 1, Rebecca Achtman 2, Marina Bedny 1; 1 Johns Hopkins University, 2 DePauw University
How is the neural basis of concrete word meanings (e.g. fox, sparkle) influenced by sensory experience? We used univariate and multivoxel pattern analyses (MVPA) to compare noun- and verb-responsive networks between congenitally blind (N=16) and sighted (N=16, blindfolded) individuals. Participants listened to pairs of words and judged their semantic similarity during fMRI scans. Words were blocked by semantic category. There were 4 verb categories: sound emission (“to boom”), light emission (“to sparkle”), mouth actions (“to bite”), and hand actions (“to caress”); and 4 noun categories: birds (“the crow”), mammals (“the lion”), natural places (“the marsh”), and manmade places (“the shed”). Consistent with previous findings, univariate analysis in sighted participants revealed partially non-overlapping networks active during noun processing (inferior parietal lobule, IP; precuneus, PC; inferior temporal cortex, IT) and verb processing (middle temporal gyrus, MTG). Univariate analysis showed that these networks exhibit similar patterns of selectivity in blind individuals. Using the univariate results, we defined subject-specific verb- and noun-preferring regions of interest (ROIs). Within each ROI, a linear support vector machine (SVM) classifier was trained to decode among verbs and among nouns. In both groups, we observed a double dissociation in sensitivity to distinctions among verbs and among nouns: classification was significantly more accurate for verbs than nouns in the MTG, and for nouns than verbs in IP and PC (2 Group x 3 ROI x 2 Word Type repeated-measures ANOVA: Group F(1,30)=12.43, p=0.001; ROI F(2,60)=12.09, p<0.001; Word Type F(1,30)=18.76, p<0.001; Group x ROI F(2,60)=3.37, p=0.04; ROI x Word Type F(2,60)=22.32, p<0.001; all other effects ps>0.05). Furthermore, in IP and PC, classification among concrete nouns was successful in both groups (Sighted: IP t(15)=9.02, PC t(15)=8.0; Blind: IP t(15)=4.76, PC t(15)=4.52; all ps<0.001).
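The ROI decoding analysis described above can be illustrated schematically. The sketch below is hypothetical and uses synthetic data with assumed dimensions (8 runs, 4 categories, 200 voxels), not the authors' pipeline; the study used a linear SVM, while this dependency-light stand-in uses a correlation-based nearest-centroid classifier, another common MVPA decoder, with leave-one-run-out cross-validation as is standard for blocked designs.

```python
# Hypothetical MVPA decoding sketch on synthetic data (not the authors' code).
# A nearest-centroid (correlation) classifier decodes 4 word categories from
# simulated ROI voxel patterns, with leave-one-run-out cross-validation.
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_cats, n_voxels = 8, 4, 200            # assumed dimensions
base = rng.normal(size=(n_cats, n_voxels))      # latent category patterns
# One noisy pattern per category per run.
data = base + rng.normal(scale=2.0, size=(n_runs, n_cats, n_voxels))

correct = 0
for test_run in range(n_runs):
    # Centroid of each category, averaged over the training runs only.
    centroids = np.delete(data, test_run, axis=0).mean(axis=0)
    for cat in range(n_cats):
        # Assign the held-out pattern to the most correlated centroid.
        r = [np.corrcoef(data[test_run, cat], centroids[c])[0, 1]
             for c in range(n_cats)]
        correct += int(np.argmax(r) == cat)

accuracy = correct / (n_runs * n_cats)
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / n_cats:.2f})")
```

Above-chance accuracy in such a scheme indicates that the ROI's voxel patterns carry category information, which is the logic behind the verb/noun decoding comparisons reported here.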
However, MVPA revealed some between-group differences in neural responses to concrete nouns. First, classification in IT was above chance, and better for nouns than verbs, only in the sighted group (Sighted: verbs t(15)=4.6, p<0.001; nouns t(15)=6.63, p<0.001; Blind: verbs t(15)=0.6; nouns t(15)=1.45; 2 Group x 2 Word Type repeated-measures ANOVA: Group F(1,30)=22.8, p<0.001; Word Type F(1,30)=9.64, p=0.004; Group x Word Type F(1,30)=3.9, p=0.06). Second, within noun-responsive IP and PC, the classifier discriminated between mammals and birds only in the blind group (Blind: IP A’=0.62, t(15)=2.15; PC A’=0.63, t(15)=3.93; ps<0.005; Sighted: IP A’=0.51, t(15)=0.27; PC A’=0.48, t(15)=0.34; ps>0.4; 2 Group x 2 ROI repeated-measures ANOVA: Group F(1,30)=10.07, p=0.003; all other effects ps>0.3). This was despite overall classification being better in the sighted group (Group F(1,30)=12.43, p=0.001). In the behavioral data, blind participants also judged some birds considered highly dissimilar by sighted subjects (e.g. the crow – the parrot) to be more similar (t(29.75)=2.44, p=0.02). These findings suggest that the neural basis of word meanings is similar in blind and sighted individuals, with the MTG and IP/PC preferentially representing verbs and nouns, respectively. These results are consistent with the hypothesis that these regions encode abstract representations of events/verb meanings (MTG) and entity concepts (IP/PC). However, when making semantic similarity judgments about taxonomically similar animals, blind individuals may rely more on the IP/PC “entity concept” network, whereas sighted subjects rely more on appearance knowledge in IT.
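The A’ values reported above are a nonparametric sensitivity index: 0.5 is chance discrimination and 1.0 is perfect discrimination. A common way to compute it, shown below as an illustrative sketch (the abstract does not specify the exact formula used), is the Pollack and Norman estimate from a hit rate H and a false-alarm rate F.

```python
# A' (A-prime): nonparametric sensitivity index from hit rate H and
# false-alarm rate F. A' = 0.5 is chance; 1.0 is perfect discrimination.
# Illustrative sketch; undefined at H = F = 0 or H = F = 1.
def a_prime(hit_rate: float, fa_rate: float) -> float:
    H, F = hit_rate, fa_rate
    if H >= F:
        return 0.5 + ((H - F) * (1 + H - F)) / (4 * H * (1 - F))
    # Symmetric form for below-chance performance (H < F).
    return 0.5 - ((F - H) * (1 + F - H)) / (4 * F * (1 - H))

print(a_prime(0.7, 0.3))  # above chance -> A' > 0.5
print(a_prime(0.5, 0.5))  # chance -> 0.5
```

On this scale, the blind group's IP/PC values of 0.62–0.63 for mammals versus birds indicate modest but reliable discrimination, whereas the sighted group's 0.48–0.51 sit at chance.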
Topic Area: Meaning: Lexical Semantics