

Poster A57, Thursday, August 16, 10:15 am – 12:00 pm, Room 2000AB

Brain activation for spoken and sign language in infancy: Impact of experience in unimodal and bimodal bilinguals

Evelyne Mercure1, Samuel Evans2, Laura Pirazzoli3, Laura Goldberg1, Harriet Bowden-Howl1,4, Kimberley Coulson1,5, Sarah Lloyd-Fox3, Indie Beedie1, Mark H. Johnson3,6, Mairead MacSweeney1

1University College London; 2University of Westminster; 3Birkbeck - University of London; 4University of Plymouth; 5University of Hertfordshire; 6University of Cambridge

Adult neuroimaging studies robustly demonstrate that sign language is processed in a brain network similar to that for spoken language (Capek et al., 2008; Emmorey, 2001; Hickok et al., 1996; MacSweeney et al., 2004; MacSweeney et al., 2008; Petitto et al., 2000). This is a strong argument for the idea that the classical language areas of the left hemisphere are specialized for processing natural languages independent of their modality. In infancy, spoken language activates a network similar to the adult language network, and activation of perisylvian areas is often observed to be larger in the left than the right hemisphere (Dehaene-Lambertz et al., 2002; Minagawa-Kawai et al., 2010; Peña et al., 2003). The neural representation of sign language has never been studied in infancy, and it is unclear how experience of different language modalities influences the neural substrate of language in infancy. The present study used functional near-infrared spectroscopy (fNIRS) to compare and contrast the neural representation of spoken and sign language in three groups of infants with different language experience. Data are presented from 60 infants between 4 and 8 months of age: 19 monolingual infants exposed exclusively to English, 20 unimodal bilingual infants frequently and regularly exposed to English and one or more additional spoken languages, and 21 bimodal bilingual infants with a Deaf mother, exposed to English and British Sign Language (BSL). Brain activation was measured with 46-channel fNIRS while infants were presented with audiovisual videos of short stories in spoken or sign language. Univariate analyses and multivariate pattern analyses (MVPA) were used to study the neural substrate of spoken and sign language in the three groups of infants. A support vector machine with leave-one-participant-out cross-validation and permutation testing was used to decode patterns of activation for each modality. In monolinguals, patterns of activation for spoken and sign language could be classified at a level greater than chance using left hemisphere channels (proportion correct = 0.68; p = 0.04), but not right hemisphere channels (proportion correct = 0.50; p = 0.71). In unimodal bilinguals, spoken and sign language could be classified at a level approaching significance using all channels (proportion correct = 0.62; p = 0.09), but not using either hemisphere alone (left: proportion correct = 0.55; p = 0.40; right: proportion correct = 0.57; p = 0.18). In bimodal bilinguals, MVPA could not classify spoken and sign language (all channels: proportion correct = 0.50; p = 0.54; left: proportion correct = 0.57; p = 0.57; right: proportion correct = 0.52; p = 0.45). Univariate analyses revealed group differences in the amplitude and lateralisation of brain activation to language: compared to monolinguals, bimodal bilinguals demonstrated reduced amplitude of language activation, while unimodal bilinguals showed increased right lateralisation. These results suggest that early language experience influences the neural substrate of language. Language modalities were more clearly distinguishable in groups of infants for whom one modality (sign language) was unfamiliar.
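For readers unfamiliar with the decoding procedure, the sketch below illustrates the general form of such an analysis: a linear support vector machine classifying spoken- versus sign-language activation patterns, with leave-one-participant-out cross-validation and permutation testing, implemented here with scikit-learn. This is a minimal sketch, not the authors' pipeline: the data are random placeholders, and the variable names (X, y, participant) and parameter choices are hypothetical.

```python
# Minimal sketch (assumed, not the authors' code) of SVM decoding with
# leave-one-participant-out cross-validation and permutation testing.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, permutation_test_score

rng = np.random.default_rng(0)

n_infants, n_channels = 19, 46   # e.g. the monolingual group; 46 fNIRS channels

# Hypothetical data: one activation pattern per infant per condition.
X = rng.standard_normal((2 * n_infants, n_channels))  # placeholder fNIRS responses
y = np.repeat([0, 1], n_infants)                      # 0 = spoken, 1 = sign
participant = np.tile(np.arange(n_infants), 2)        # infant ID for each pattern

# Leave-one-participant-out: both patterns from one held-out infant
# form the test set on each fold, so accuracy reflects generalization
# to a new infant rather than to new trials from a seen infant.
cv = LeaveOneGroupOut()
clf = SVC(kernel="linear")

# Permutation test: condition labels are shuffled within each participant
# to build a null distribution of classification accuracies.
score, perm_scores, p_value = permutation_test_score(
    clf, X, y, groups=participant, cv=cv,
    n_permutations=1000, random_state=0,
)
print(f"proportion correct = {score:.2f}; p = {p_value:.2f}")
```

Restricting X to left- or right-hemisphere channels before running the same procedure would correspond to the hemisphere-specific decoding analyses reported above.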

Topic Area: Language Development
