Poster C39, Wednesday, August 21, 2019, 10:45 am – 12:30 pm, Restaurant Hall

Neural correlates for comprehending spatial language in American Sign Language and English

Karen Emmorey1, Stephen McCullough1, Christopher Brozdowski1;1San Diego State University

In American Sign Language (ASL), spatial relationships are conveyed by the location of the hands in space. To express “The candle is on the box,” a 1-handshape representing the candle is positioned on top of a flat handshape representing the box. To understand perspective-dependent expressions (e.g., “The candle is to the right of the ball”), a 180° mental transformation is required for face-to-face signing. In contrast, English expresses spatial relationships with prepositional phrases, and no linguistic spatial transformation is required. Previous research has shown that the production of spatial language differs for ASL and English, with greater involvement of bilateral superior parietal cortex for ASL (Emmorey et al., 2002; 2005; 2013). We investigated whether the neural regions involved in the comprehension of spatial language differ for ASL signers and English speakers. In an event-related fMRI experiment, 14 deaf ASL signers and 14 hearing English speakers viewed ASL or audio-visual English descriptions of either a perspective-independent relation (in, on, below, above) or a perspective-dependent relation (left, right, behind, in front of) between two objects. The control condition was non-spatial descriptions of the colors of two objects (e.g., “The candle is blue and the ball is red”). After 20% of trials, a picture of two colored objects was presented that either matched or mismatched the spatial configuration or the colors described in the preceding sentence. Two 6-minute scans were presented, with 24 trials in each condition (perspective-dependent; perspective-independent). Trials consisted of a 4-second ASL video or 3-second English video, a 2-second fixation ISI, and, on 20% of trials, a 2-second picture, followed by variable fixation periods (2–10 seconds). Accuracy and response times for the sentence-picture matching task did not differ between signers and speakers.
In contrast to the non-spatial control, perspective-dependent expressions engaged the superior parietal lobule (SPL) bilaterally for both ASL and English. This result is consistent with Condor et al. (2017), who reported bilateral SPL activation during comprehension of English spatial expressions using a similar experimental design. For perspective-independent expressions, activation in SPL was more right-lateralized for ASL and more left-lateralized for English. Right parietal regions may support the visual-spatial mapping required in ASL between the position of the hands in signing space and a mental representation of the location of referent objects. The direct contrast between the two spatial expression types in ASL revealed greater SPL activation for perspective-dependent expressions, while the same contrast for English revealed no difference in activation. Increased SPL activation for perspective-dependent expressions in ASL may reflect the cognitively demanding 180° mental transformation required to understand these expressions (Brozdowski et al., 2019). Overall, the results suggest that both overlapping and distinct neural regions support spatial language comprehension in ASL and English.

Themes: Signed Language and Gesture, Meaning: Lexical Semantics
Method: Functional Imaging