Poster B11, Thursday, August 16, 3:05 – 4:50 pm, Room 2000AB
Neural processing of hyperarticulated speech in second language learners
Yang Zhang1, Keita Tanaka2, Yue Wang3, Dawn Behne4; 1University of Minnesota, 2Tokyo Denki University, 3Simon Fraser University, 4Norwegian University of Science and Technology
It is well known that speakers adjust their speaking style to accommodate the communicative needs of their listeners. Infant-directed speech, for instance, is characterized by higher pitch, expanded pitch contour and range, longer syllables, and hyperarticulation of vowels. While pitch exaggeration is reduced in foreigner-directed speech, phonetic exaggeration in the formant patterns remains a dominant feature. Researchers argue that pitch exaggeration serves to attract and maintain attention and to convey and elicit positive affect, whereas hyperarticulation directly facilitates phonetic and word learning by providing wider separation of phonetic categories and larger within-category variability. The underlying brain mechanisms for these effects, however, remain unclear. This magnetoencephalography (MEG) study investigated how formant exaggeration in hyperarticulated speech affects brain processing of speech in second language (L2) learners. MEG data were collected from ten healthy adult listeners (ages 20-24) who had studied English as a second language for at least eight years in school. The experiment was conducted with a 122-channel whole-head MEG system (Elekta-Neuromag) in a magnetically shielded room. During recording, the subjects were instructed to watch a DVD video and ignore the stimuli. The stimuli, three computer-synthesized vowels ([i], [a], [u]) in both exaggerated and non-exaggerated forms, were presented randomly in blocks. The speech sounds were normalized in intensity and duration and had identical pitch contours. They were delivered binaurally at a sensation level of 50 dB via plastic ear inserts, with the interstimulus interval randomized between 900 and 1100 ms. Eye activity was monitored with two pairs of bipolar electrodes. Trials contaminated by EOG activity or other disturbances with MEG peak amplitudes larger than 3000 fT/cm were removed.
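The trial-rejection step described above (discarding trials whose MEG peak amplitude exceeds 3000 fT/cm) can be sketched as follows. This is a minimal illustration only, assuming epoched data stored as a NumPy array in fT/cm; the array shapes and function name are hypothetical, not from the authors' pipeline.

```python
import numpy as np

def reject_trials(epochs, threshold=3000.0):
    """Keep only trials whose peak MEG amplitude stays below threshold.

    epochs: array of shape (n_trials, n_channels, n_samples), in fT/cm
            (hypothetical layout, not the authors' actual data format).
    Returns the retained epochs and a boolean mask of kept trials.
    """
    peaks = np.abs(epochs).max(axis=(1, 2))   # peak amplitude per trial
    keep = peaks < threshold
    return epochs[keep], keep

# Example: 5 simulated trials, one contaminated by a large EOG-like artifact
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 100.0, size=(5, 122, 200))
epochs[2] += 5000.0                            # inject an artifact into trial 2
clean, keep = reject_trials(epochs)
print(clean.shape[0], "trials retained")       # prints "4 trials retained"
```

In practice such rejection is usually done per channel type with separate thresholds for EOG and MEG, but the amplitude-threshold principle is the same.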
The preprocessed MEG data were further analyzed using source estimation and time-frequency methods. Consistent with a previous ERP study of native speakers of English, the MEG data from the L2 learners showed stronger N1m responses for all three formant-exaggerated vowels. Time-frequency analysis further indicated a role for neural oscillatory activity, including the beta band, in coding hyperarticulated speech. The MEG results provide strong evidence for an early, automatic effect of formant exaggeration in the auditory cortex, independent of effortful listening, which may facilitate L2 speech perception and acquisition.
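The beta-band measure referred to above can be illustrated with a simple FFT-based power estimate. This is a sketch only, not the authors' time-frequency pipeline (which would typically use wavelet or multitaper decompositions); the sampling rate and the 13-30 Hz band edges here are assumptions for the example.

```python
import numpy as np

def band_power(signal, fs, fmin=13.0, fmax=30.0):
    """Mean spectral power of `signal` within [fmin, fmax] Hz (beta band)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)   # frequency axis
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    band = (freqs >= fmin) & (freqs <= fmax)
    return psd[band].mean()

# Example: a 20 Hz (beta-range) oscillation yields more beta power
# than a 5 Hz (theta-range) one of the same amplitude.
fs = 600.0                                   # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
beta_sig = np.sin(2 * np.pi * 20 * t)
theta_sig = np.sin(2 * np.pi * 5 * t)
print(band_power(beta_sig, fs) > band_power(theta_sig, fs))  # True
```

A full time-frequency analysis would track how such band power evolves across the epoch rather than over a single window, but the band-limited power computation is the common building block.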
Topic Area: Perception: Speech Perception and Audiovisual Integration