
Cross-modal effects of pseudo-sign articulation (overt and covert) on the premotor cortex: an adaptation fMRI study

Poster B114 in Poster Session B, Tuesday, October 24, 3:30 - 5:15 pm CEST, Espace Vieux-Port

Stephen McCullough1, Karen Emmorey1; 1SDSU

Tian and Poeppel (2013) found that both overt and covert speech enhanced activation in auditory cortex: overt speaking and imagined speaking both increased the neural response to the same (subsequently presented) auditory syllable. In contrast, auditory imagery (imagining hearing the syllable) and actually hearing the syllable both suppressed the neural response to the heard syllable probe (a repetition priming effect). We investigated whether similar effects occur in a visual-manual language, American Sign Language (ASL), testing 13 deaf participants. We created grayscale videos of a right hand producing eight different pseudosign syllables (probes) and 24 scrambled videos (adaptors) with a transparent square cue (gray for overt production; black for covert production) and one of eight different pictographs, each corresponding to a pseudosign. Prior to scanning, deaf signers learned the association between each pseudosign and its pictograph. Participants also learned to articulate the pseudosigns overtly or covertly, depending on the color of the square cue. The fMRI study consisted of four event-related adaptation scans and two blocked-design functional localizers. The localizer scans always followed the adaptation scans. The first localizer identified the cortical regions involved in viewing hand and foot motor actions (VIEW-LOC). The VIEW-LOC stimuli consisted of a randomized order of 20s blocks showing videos of pseudosigns, foot motor actions, and scrambled videos without the square cue. We instructed participants to pay attention to the stimuli during the run. The second localizer identified the regions involved in producing hand motor actions (PROD-LOC). The PROD-LOC stimuli consisted of a randomized order of 20s hand or foot blocks showing scrambled videos with pictograph cues, and participants produced the hand or foot motor actions corresponding to the pictographs (learned prior to scanning).
For the event-related fMRI adaptation scans, participants viewed a total of 256 trials, each consisting of a pair of videos (2s each) separated by 1s. The first video (adaptor) was randomly selected from four categories: overt articulation, covert articulation, visual imagery, or visual presentation. The second video (probe) always showed a pseudosign that was either the same as or different from the pseudosign associated with the adaptor. We used the neural activation clusters identified in VIEW-LOC and PROD-LOC as regions of interest for our whole-brain analysis of BOLD responses acquired during the adaptation scans. Surprisingly, the localizers revealed no overlap between the brain regions for viewing and producing pseudosigns. The whole-brain analysis (p = .01) of neural adaptation showed strong adaptation effects in the supplementary motor area (SMA) for the overt articulation, covert articulation, and visual imagery adaptors, but not for the visual presentation adaptor. SMA was also localized by the hand condition of the production localizer. Moreover, the production localizer identified hand regions (precentral gyrus and middle frontal gyrus) that overlapped with the clusters showing adaptation effects for overt and covert articulation. Of all the adaptation effects observed, those for covert articulation were the strongest and most widespread. Overall, our results reveal distinct patterns of neural adaptation for internally versus externally generated signing compared to those found for speech.

Topic Areas: Signed Language and Gesture, Multisensory or Sensorimotor Integration
