Symposia


Plasticity in a language-ready brain: complementary evidence from developmental deafness, blindness, and varied language experience across modalities

Thursday, October 6, 3:30 - 5:30 pm EDT, Regency Ballroom

Organizers: Marina Bedny1, Qi Cheng2; 1Johns Hopkins University, 2University of Washington
Chair: Marina Bedny, Johns Hopkins University
Presenters: Rain Bosworth, Qi Cheng, Karen Emmorey, Katarzyna Jednoróg, Marina Bedny

The neural basis of language is similar across spoken and signed languages, suggesting a ‘language-ready’ brain. How does experience modify language networks and their interactions with other systems? This symposium brings together research with individuals who are born blind or deaf, combining insights into plasticity from complementary methods. Dr. Bosworth presents insights from eye-tracking into how infants identify modality-invariant language patterns, prior to sign language experience, and how experience changes these abilities. Dr. Cheng combines diffusion tensor imaging and behavioral measures to investigate effects of early delays in language access on language network development. Dr. Emmorey uses time-sensitive ERP data to show how sign and spoken language experience shapes reading networks. Dr. Jednoróg and Dr. Bedny present novel fMRI data revealing effects of congenital blindness and Braille expertise on spoken and written language networks. The discussion will highlight novel insights into language-network plasticity of interest to the broad SNL community.

Presentations

The impact of early auditory or visual language experience on infants’ eye gaze behavior for linguistic patterns

Rain Bosworth1, So-One Hwang2, David P. Corina3; 1National Technical Institute for the Deaf, Rochester Institute of Technology, 2Center for Research in Language, University of California, San Diego, 3Center for Mind and Brain, University of California, Davis

Infants accomplish language acquisition with ease, first identifying novel sensory patterns relevant for learning language, aided by heuristics that propel them toward socially relevant communicative signals. This is true for infants exposed to spoken or signed language from birth. New evidence reveals how infants distinguish signed linguistic patterns from non-linguistic ones. We present evidence from a series of eye-tracking studies showing that 6-month-olds, despite never having seen sign language, can discriminate well-formed from ill-formed signed patterns and signs from gestures, suggesting that early language sensitivity transcends the sensory modality of communication. Without exposure to visual language, however, this sensitivity wanes by 1 year of age. Finally, home language experience (signed vs. spoken) alters infants’ overt attention to linguistic vs. non-linguistic patterns very early. We conclude with a discussion of the amodal perceptual cues that are relevant for language learning, linguistic specialization over time, and infants’ emerging awareness of communicative gestural acts.

Processing and anatomical outcomes when early language input is insufficient: evidence from deaf individuals with early language deprivation

Qi Cheng1; 1University of Washington

Deaf individuals are more likely to experience a restricted early language environment due to the inaccessibility of spoken languages and the unavailability of a sign language. In this talk, I will discuss the processing and anatomical outcomes of lacking early language input. Deaf individuals who had severely delayed sign language onset relied more on world knowledge than on American Sign Language (ASL) word order. Chinese deaf individuals with limited early language also showed less robust reliance on morpho-syntactic cues and more processing difficulties in written Chinese. Using diffusion tensor imaging (DTI), we found reduced white matter connectivity in the left arcuate fasciculus among three deaf individuals with severely delayed ASL onset. Surface-based morphometry also revealed negative correlations between age of ASL onset and cortical measurements in several bilateral frontal and posterior language regions among deaf individuals with ASL onset between birth and 14 years of age.

The neural circuit for reading in deaf adults reflects experience-specific adaptations

Karen Emmorey1, Katherine Midgley1, Phillip Holcomb1; 1San Diego State University

Skilled deaf readers provide a novel model for probing how the neural circuitry for reading adapts to distinct sensory and linguistic experiences. Early deafness alters the distribution of visual attention, which can impact visual word processing, and weak phonological skills (due to reduced access to sound) can impact reading by increasing reliance on orthographic and semantic information. ERP data indicate that early visual responses to words differ for skilled deaf vs. hearing readers (reduced P1 amplitude, reversed visual complexity effects, earlier frequency effects, and a more bilateral N170 component). Reading skill also modulates these components differently (N170 and P1 amplitudes correlate differently with reading ability for deaf and hearing readers). Further, deaf readers exhibit more pronounced concreteness effects and N400 effects for sentential violations than skill-matched hearing readers. These findings highlight the plasticity of the reading system and illuminate the neurocognitive adaptations that occur when deaf adults achieve reading success.

Speech-reading convergence in the blind

Katarzyna Jednoróg1; 1Nencki Institute of Experimental Biology

All writing systems represent units of spoken language, and reading relies on access to speech processing brain areas. Speech-reading convergence onto a common perisylvian network is a hallmark of acquiring literacy and is considered universal across writing systems. Using fMRI, we tested whether this also holds for tactile Braille reading in the blind. Even though both blind and sighted participants showed similar perisylvian specialization for speech, in contrast to the sighted, blind subjects did not engage these areas for reading. Speech-reading convergence in the blind was instead present in the left ventral occipitotemporal cortex (vOT). The involvement of the vOT in speech processing and its engagement in reading suggest that the vOT is part of a modality-independent language network in the blind, a conclusion also supported by functional connectivity results. We find that language responses in the vOT of blind readers increase with both age and Braille reading skill.

Distinctive neural basis of reading by touch in congenitally blind proficient Braille readers: a parieto-occipital gradient

Marina Bedny1, Mengyu Tian1; 1Johns Hopkins University

The neural basis of reading is highly consistent across visual scripts. Reading recruits a left-lateralized posterior-to-anterior gradient in ventral occipito-temporal cortex (vOTC), with more posterior responses to letter shapes and more anterior responses to words. How do reading modality and sensory experience shape language networks? We compared the neural basis of written and spoken word comprehension across sighted and congenitally blind readers using fMRI. Blind and sighted participants read analogous stimuli varying in linguistic complexity (words, consonant strings, and shapes) and heard spoken words and noise. Unlike the vOTC of sighted readers, the vOTC of congenitally blind readers did not show a gradient, instead uniformly preferring written and spoken words across its anterior/posterior extent. Congenitally blind Braille readers instead recruit a distinctive anterior-to-posterior reading gradient in parieto-occipital cortex. Blind readers also showed a distinctive pattern of lateralization consistent with a parieto-occipital hierarchy of Braille processing. Experience modifies the neural basis of spoken and written language.
