
Localizing visual mismatch responses in American Sign Language (ASL) using MEG

Poster E19 in Poster Session E, Saturday, October 8, 3:15 - 5:00 pm EDT, Millennium Hall
This poster is part of the Sandbox Series.

Qi Cheng¹, Christina Zhao¹; ¹University of Washington

Auditory mismatch responses (MMR) have been used in spoken language research to examine the automatic detection of linguistic anomalies and changes at the phonetic, phonological, lexical, and morpho-syntactic levels (Pulvermüller & Shtyrov 2006). To date, very few studies have addressed similar questions for sign languages using the visual MMR (vMMR). The only existing study of sign language vMMR used static images of Hong Kong Sign Language (HKSL) real signs and non-signs in an oddball paradigm and found a larger vMMR in deaf signers than in hearing controls, but only for detecting real signs, suggesting early automatic lexical processing (Deng, Gu, & Tong 2020). The authors also noted a potential topographic difference in the vMMR. Comparing the localization of lexical MMR effects in spoken and sign languages can provide further insight into cross-modal neural mechanisms during lexical access.

In the current study, we aim to replicate the HKSL findings in ASL and to further localize the vMMR using MEG. We identified one pair of highly frequent lexical signs (KID, with handshape = horn and location = nose; BOY, with handshape = flat-B and location = forehead) and switched the handshapes to create two non-signs (ns_BOY_horn, with handshape = horn and location = forehead; ns_KID_flatB, with handshape = flat-B and location = nose). We conducted a behavioral AX discrimination task with 5 proficient deaf signers and 5 hearing non-signers and found greater sensitivity to handshape changes (BOY vs. ns_BOY_horn pairs and KID vs. ns_KID_flatB pairs) in deaf signers (mean d' = 2.93, SD = 0.6) than in hearing non-signers (mean d' = 2.05, SD = 0.52).

For the MEG study, we plan to include 15 proficient deaf signers and 15 age-matched hearing non-signers. We will examine the same lexical effect using an oddball paradigm in which deviants are interspersed among standards about 15% of the time. In each block, the standards and deviants constitute a lexical vs. non-lexical contrast created by changing the handshape but not the location (e.g., standard: BOY; deviant: ns_BOY_horn). To ensure that participants process the signs preattentively, they will be instructed to focus on a fixation cross at the center of the screen and to detect when the cross changes shape.

The MEG data will undergo standard processing steps to minimize noise from sources outside the dewar, noise related to head movement, and physiological artifacts (e.g., heartbeat). Epochs for standard and deviant trials will then be extracted, and the vMMR will be calculated by subtracting the standard response from the deviant response. The source of the vMMR will be investigated by projecting it onto a cortical surface. Specifically, we will first examine the vMMR in several a priori regions of interest (ROIs), including occipital, posterior temporal, and inferior frontal regions.

We anticipate replicating the finding that deaf signers show a larger vMMR for lexical deviants than non-signers, and we hypothesize that this effect will be most prominent in the language regions (posterior temporal and inferior frontal). We will also conduct a more exploratory whole-brain analysis comparing the vMMR across groups, which will inform us of any other processes that might contribute to between-group differences.
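The behavioral sensitivity measure reported above (d') is conventionally computed as the difference between the z-transformed hit and false-alarm rates. A minimal sketch follows; the rates in the usage comment are illustrative values, not data from this study.

```python
from scipy.stats import norm

def dprime(hit_rate, fa_rate):
    """Sensitivity index: d' = Z(hit rate) - Z(false-alarm rate).

    Assumes rates are strictly between 0 and 1 (in practice, extreme
    rates are typically adjusted, e.g. with a log-linear correction).
    """
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Illustrative values only:
# dprime(0.9, 0.1) is about 2.56
```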
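The vMMR computation described above (averaging epochs within each trial type, then subtracting the standard response from the deviant response) can be sketched with synthetic data. The array names, trial counts, and channel/time dimensions below are illustrative assumptions, not the study's actual pipeline, which would use dedicated MEG software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: ~15% deviants, 102-channel MEG, 200 time samples per epoch
n_std, n_dev, n_channels, n_times = 340, 60, 102, 200

# Synthetic epochs (trials x channels x times); deviants carry an
# extra evoked deflection of 0.5 arbitrary units
standards = rng.normal(0.0, 1.0, (n_std, n_channels, n_times))
deviants = rng.normal(0.0, 1.0, (n_dev, n_channels, n_times)) + 0.5

# Average across trials, then subtract: vMMR = mean(deviant) - mean(standard)
vmmr = deviants.mean(axis=0) - standards.mean(axis=0)
print(vmmr.shape)  # one difference waveform per channel: (102, 200)
```

In the planned analysis this channel-space difference would then be projected to a cortical surface for the ROI and whole-brain comparisons.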

Topic Areas: Signed Language and Gesture, Phonology and Phonological Working Memory