Poster C28, Thursday, November 9, 10:00 – 11:15 am, Harborview and Loch Raven Ballrooms

Atypical phonemic discrimination but not audiovisual speech integration in children with the broader autism phenotype, autism, and speech sound disorder.

Julia Irwin1,3, Trey Avery1, Jacqueline Turcios1,3, Lawrence Brancazio1,3, Barbara Cook3, Nicole Landi1,2; 1Haskins Laboratories, 2University of Connecticut, 3Southern Connecticut State University

When a speaker talks, the consequences can be both heard (audio) and seen (visual). Visual information about speech has been shown to influence what listeners hear, both in noisy environments (known as visual gain) and when the auditory portion of the speech signal can be clearly heard (mismatched audiovisual speech demonstrates a visual influence in clear listening conditions, known as the McGurk effect). This influence of visible speech on hearing has been demonstrated in infancy; further, typical speech and language development is thought to take place in this audiovisual (AV) context, fostering native language acquisition. Individuals with autism spectrum disorder (ASD) and speech sound disorder (SSD) display marked deficits in communicative behavior; however, those with SSD have primary deficits in speech production, whereas those with ASD have broader communicative deficits. Several studies from our lab and others have observed atypical AV speech processing in ASD, but limited work has examined this important aspect of communication in those with SSD. We used a novel visual phonemic restoration task to assess behavioral discrimination and neural signatures (event-related potentials, or ERPs) of audiovisual speech processing in typically developing children with a range of social and communicative skills, as well as in children with ASD and children with SSD. Using an auditory oddball design, we presented two types of stimuli to the listener: a clear exemplar of an auditory consonant-vowel syllable /ba/ and a syllable in which the auditory cues for the consonant were substantially weakened, creating a stimulus that sounds more like /a/. All speech tokens were paired with either a face producing /ba/ or a face with a pixelated mouth, which effectively masks visual speech articulation. In this paradigm, the visual /ba/ should lead the auditory /a/ to be perceived as /ba/ (a phonemic restoration effect), producing an attenuated oddball response, whereas the pixelated video should not have this effect. Overall, we observed behavioral and ERP effects consistent with phonemic restoration across all groups (smaller P300 effects in the presence of a face producing /ba/). However, participants diagnosed with ASD or SSD showed overall reductions in phonemic discrimination (reduced P300 effects) regardless of face context (audiovisual or pixelated mouth), suggesting that these developmental disorders are associated with impairments in speech processing but not in AV speech integration per se.

Topic Area: Language Disorders
