You are viewing the SNL 2018 Archive Website.

Poster A5, Thursday, August 16, 10:15 am – 12:00 pm, Room 2000AB

Self-monitoring in L1 and L2 speech production: an MEG study

Sarah Bakst¹, Caroline A. Niziolek¹; ¹University of Wisconsin–Madison

We listen to ourselves while talking, comparing our acoustic output to an internal auditory representation of how our speech should sound. Because these representations of speech targets are weaker in a second language (L2), self-monitoring may be less successful, resulting in more variable, less native-like speech. In the current study, participants were recorded producing monosyllabic words in L1 (English) and L2 (French) during a magnetoencephalography (MEG) scan. The vowels tested were English {i, ɛ, æ} ("Eve", "eff", "add") and French {i, ɛ, œ} ("Yves", "hais", "oeuf"). Previous work (Niziolek et al. 2013) has shown that native speakers use auditory feedback to correct their speech in real time: because speakers are sensitive to the natural acoustic variability in their productions, they can steer deviant productions toward their auditory targets while speaking. This corrective behavior is evident in the magnitude and direction of vowel formant trajectories over the course of an utterance. The speakers in the present study showed such corrective behavior in L1 but not in L2: utterances in French showed both increased acoustic variability and reduced corrective behavior. Further, the most variability and the least corrective behavior were found for [œ], the only vowel that cannot be mapped onto an English category. Unlike in L1, where increased acoustic variability is associated with increased corrective behavior, the greater acoustic variability of L2 vowels did not elicit more correction. These results indicate weakened auditory representations of speech targets in L2 and suggest that these weak representations impair the ability to correct one's own productions.

Learning a second language is also associated with structural differences in the brain: beginners show increased structural connectivity between hemispheres compared with monolinguals and more proficient bilinguals (Xiang et al. 2015). Here, we investigated functional differences while speaking and listening in L1 and L2. Neuroimaging studies have previously shown that the auditory cortical response to hearing one's own speech during L1 production is suppressed relative to silent listening to those same productions (Houde et al. 2002; Niziolek et al. 2013). During MEG recording, the production task described above alternated with a listening task in which participants heard acoustically matched productions played over headphones. Preliminary analyses of the MEG data show left auditory cortical suppression in both L1 and L2, providing evidence of self-monitoring in both languages. However, there was a reliable difference in laterality: while the cortical response to self-produced speech in L1 was highly left-lateralized, responses in L2 were less lateralized during both speaking and listening. Our findings suggest that adult L2 learners recruit the right hemisphere more during both speaking and listening, which may be related to the structural changes observed to accompany language learning.
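The within-utterance corrective behavior described above is commonly quantified as "centering": the reduction in distance from the vowel's median formants between the start and end of an utterance (after Niziolek et al. 2013). The sketch below is a minimal, hypothetical version of such a measure; the window size and the use of per-window medians are illustrative assumptions, not the authors' exact analysis.

```python
import numpy as np

def centering(f1_tracks, f2_tracks):
    """Mean within-utterance movement toward the median formants.

    f1_tracks, f2_tracks: arrays of shape (n_trials, n_samples) holding
    F1/F2 trajectories (Hz) for repeated productions of one vowel.
    Returns the mean reduction in distance from the per-window median
    between utterance onset and offset (positive = corrective).
    NOTE: the 20% edge windows are an illustrative choice.
    """
    n = f1_tracks.shape[1]
    k = max(1, n // 5)  # first/last 20% of each trajectory

    # Average formants in the initial and final windows of each trial
    start = np.stack([f1_tracks[:, :k].mean(axis=1),
                      f2_tracks[:, :k].mean(axis=1)], axis=1)
    end = np.stack([f1_tracks[:, -k:].mean(axis=1),
                    f2_tracks[:, -k:].mean(axis=1)], axis=1)

    # Median production in F1-F2 space approximates the auditory target
    d_start = np.linalg.norm(start - np.median(start, axis=0), axis=1)
    d_end = np.linalg.norm(end - np.median(end, axis=0), axis=1)

    # Positive when trajectories converge toward the median over time
    return (d_start - d_end).mean()
```

Under this definition, trajectories that drift toward a shared target yield a positive centering value, while trajectories that fan out yield a negative one.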
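The speaking-induced suppression and laterality effects reported above can be expressed with two standard ratios; the helpers below are a generic sketch assuming evoked response amplitudes (e.g., M100 peaks) have already been extracted per hemisphere and condition, which is not detailed in the abstract.

```python
def suppression(listen_amp, speak_amp):
    """Speaking-induced suppression: fractional reduction of the auditory
    cortical response to one's own live speech relative to playback.
    Positive values indicate suppression during speaking."""
    return (listen_amp - speak_amp) / listen_amp

def laterality_index(left_amp, right_amp):
    """Conventional laterality index over hemispheric response amplitudes:
    +1 = fully left-lateralized, -1 = fully right-lateralized."""
    return (left_amp - right_amp) / (left_amp + right_amp)
```

On this scheme, the reported L2 pattern corresponds to a still-positive suppression value but a laterality index closer to zero than in L1.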

Topic Area: Speech Motor Control and Sensorimotor Integration