Poster B70, Wednesday, November 8, 3:00 – 4:15 pm, Harborview and Loch Raven Ballrooms

Dynamic Adaptation During Lexically-Guided Perceptual Learning in People with Aphasia

David Saltzman¹, Kathrin Rothermich¹, Emily Myers¹; ¹University of Connecticut

Adapting to unfamiliar speech input in everyday conversation is an integral part of speech processing, and is accomplished with seemingly minimal effort in typical populations. In lexically-guided perceptual learning (LGPL) paradigms (e.g., Kraljic & Samuel, 2005), listeners are exposed to an ambiguous token embedded in an unambiguous lexical context, which subsequently biases their perception of a phonetic contrast. Phonetic recalibration through LGPL is associated with increased activity in known language areas (left MTG) as well as executive areas (left IFG; Myers & Mesite, 2014). This activation may reflect the interaction of top-down lexical information with the bottom-up acoustic characteristics of the stimuli (Myers & Blumstein, 2008). What is not known, however, is how necessary each of these areas, specifically left frontal (IFG) and left temporal (MTG and STG) regions, is for listeners to adapt to novel speech input. The dorsal stream, connecting posterior temporal regions with frontal systems, has been highlighted as a crucial route for language learning, but it remains to be seen whether the same route is used when listeners adapt to variation in their native language. In the present experiment, ten people with aphasia (PWA) were recruited to participate in an LGPL experiment in which they were exposed to four interleaved blocks of a lexical decision task and a phonetic categorization task; the lexical decision blocks were designed to bias perception in opposite directions along an "s"-"sh" contrast. While traditional LGPL studies use between-subjects designs, this within-subjects paradigm allows the degree of learning to be assessed by comparing the size of the boundary shifts induced by the "s"-biasing and "sh"-biasing blocks. PWA also underwent a battery of cognitive assessments and tests of language function, the scores of which will be included as covariates.
Preliminary data from ten participants suggest that while bottom-up processing of the "sign"-"shine" continuum tested in the phonetic categorization task appears intact (as indicated by generally good categorization response functions), adaptation driven by the phonetically biasing information in the lexical decision task is highly variable and overall weaker than in typical undergraduates. Voxel-based lesion-symptom mapping will be applied to better dissociate the brain regions that predict an individual participant's ability to adapt to novel speech input in this paradigm.

Topic Area: Perception: Speech Perception and Audiovisual Integration
