Poster D80, Wednesday, August 21, 2019, 5:15 – 7:00 pm, Restaurant Hall

Neural representations for spoken words are influenced by the iconicity of their sign translation equivalents in hearing, early sign-speech bilinguals

Samuel Evans1,2, Cathy J Price3, Joern Diedrichsen4, Eva Gutierrez-Sigut2, Mairéad MacSweeney2; 1Psychology Department, University of Westminster, 2Institute of Cognitive Neuroscience, University College London, 3Wellcome Trust Centre for Neuroimaging, University College London, 4Brain and Mind Institute, University of Western Ontario

How do representations in one language influence the structure of representations in another? Sign-speech bilinguals use languages that differ in their articulators and modality of expression. They also differ in many of their linguistic features: for example, spoken words rarely sound like the meanings they convey, whereas signs have greater potential to exploit visual iconicity. Despite these differences, there is evidence of significant co-dependence between speech and sign in sign-speech bilinguals, such that the properties of sign translation equivalents influence performance on tasks involving spoken or written words (Morford et al., 2001; Shook & Marian, 2012; Giezen & Emmorey, 2016). Here, we extend these findings by testing the hypothesis that experience of sign language changes the structure of neural representations of translation-equivalent spoken words in sign-speech bilinguals. We scanned seventeen right-handed participants in a 3T MRI scanner. All participants were hearing British Sign Language (BSL) users who learned BSL before 3 years of age and self-reported a high level of BSL proficiency. In the scanner, participants performed a semantic monitoring task whilst they attended to signs and equivalent spoken words produced by male and female language models. Outside the scanner, participants rated the iconicity of the signs (mean across participants = 3.98/7, min = 2.24, max = 5.44). Representational similarity analysis (RSA) was used to quantify the dissimilarity between neural patterns evoked by the same lexical items presented as spoken words and signs. The observed speech-speech and sign-sign representational distances were correlated with two candidate models: (1) an item-based dissimilarity model, which predicts greater dissimilarity between different lexical items than between repetitions of the same item, and (2) an orthogonal, iconicity-based dissimilarity model, generated by calculating the absolute difference between the mean iconicity ratings of each pair of items. Note that the iconicity model did not correlate with a model expressing the semantic dissimilarity between items (r = -0.126, n = 17, p = 0.465). To identify regions of interest, we used a searchlight analysis to find speech- and sign-specific neural responses, defined as regions in which there were larger representational distances for sign relative to speech, and vice versa. Correcting for tests in five regions, the response in left V1-V3 (peak at [-6 -98 16]) showed a fit to both the item-based and iconicity-based models in the sign-sign distances, indicating a sensitivity to sign iconicity in the primary and secondary visual cortices. For speech, correcting for tests in four regions, the response in the left STG (peak at [-56 -8 2]) showed a significant fit to the item-based model and, crucially, also to the iconicity-based model in the speech-speech distances, indicating that neural patterns evoked by spoken words are influenced by the iconic properties of the equivalent signs. These findings suggest that the structure of neural representations for speech may be changed by long-term, early exposure to sign language. Further work comparing these findings to those from hearing non-signers, using a larger number of lexical items, is necessary to confirm this.
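To make the two model RDMs concrete, the following Python sketch shows one way the item-based and iconicity-based dissimilarity models could be constructed and compared to an observed neural RDM with a rank correlation. This is not the authors' analysis code: the inputs (a vector of per-item mean iconicity ratings and an items-by-items matrix of neural representational distances) and the use of a Spearman correlation over the lower triangle are illustrative assumptions.

# Minimal sketch (not the authors' code) of the two model RDMs described in
# the abstract and a simple model-fit measure against a neural RDM.
import numpy as np
from scipy.stats import spearmanr

def item_model(n_items):
    """Item-based model RDM: 0 for same item, 1 for different items."""
    return 1.0 - np.eye(n_items)

def iconicity_model(mean_iconicity):
    """Iconicity-based model RDM: absolute difference in mean iconicity
    rating for each pair of items."""
    ratings = np.asarray(mean_iconicity, dtype=float)
    return np.abs(ratings[:, None] - ratings[None, :])

def model_fit(neural_rdm, model_rdm):
    """Spearman correlation between the lower triangles of two RDMs."""
    idx = np.tril_indices_from(neural_rdm, k=-1)
    return spearmanr(neural_rdm[idx], model_rdm[idx])

# Toy example: 8 items with random ratings and a placeholder neural RDM.
rng = np.random.default_rng(0)
ratings = rng.uniform(1, 7, size=8)            # 1-7 iconicity scale
toy_neural = np.abs(rng.normal(size=(8, 8)))   # placeholder neural distances
toy_neural = (toy_neural + toy_neural.T) / 2   # symmetrise
rho, p = model_fit(toy_neural, iconicity_model(ratings))
print(f"iconicity model fit: rho = {rho:.3f}, p = {p:.3f}")

In an analysis like the one reported, the placeholder neural RDM would be replaced by the searchlight- or ROI-based speech-speech and sign-sign distance matrices, and the fit of each model would be assessed separately within each region.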

Themes: Signed Language and Gesture, Multilingualism
Method: Functional Imaging