

Poster A16, Thursday, August 16, 10:15 am – 12:00 pm, Room 2000AB

Audiovisual facilitation in speech depends on musical background – An ERP study.

Marzieh Sorati¹, Dawn Marie Behne¹; ¹Norwegian University of Science and Technology (NTNU)

Audiovisual information facilitates perception compared to an audio-only condition. Visual information accompanying sound, such as the corresponding mouth movements while articulating a syllable, adds an anticipatory effect which can facilitate audiovisual perception. Although this audiovisual facilitation has been tested across ages, emotions, and languages, and with different speech and non-speech stimuli, it remains unclear whether musical training enhances audiovisual facilitation. While extensive experience through musical training can increase neural plasticity and improve auditory perception, an open question is whether this improvement is specific to unimodal auditory perception or transfers to audiovisual perception. The current study addresses differences between musicians and non-musicians in the audiovisual facilitation of speech due to the addition of visual information. ERP data were collected from 12 participants (six musicians and six non-musicians) in response to the syllable /ba/ in audio-only (AO), audiovisual (AV), and video-only (VO) conditions. In the audio-only condition, analyses of N1-P2 latencies and amplitudes showed that musicians have an earlier N1 latency than non-musicians. These results are consistent with previous ERP research indicating that musicians have improved auditory perception. Next, to isolate the anticipatory effect of visible mouth movements, the ERP waveform from the video-only condition was subtracted from the audiovisual condition (AV-VO) and compared to the corresponding audio-only ERP waveform. An analysis of variance with musical background and condition (AV-VO vs. AO) as factors showed a tendency toward an interaction with a large effect size for N1 latency; the difference between AV-VO and AO was therefore further analyzed for each group. Consistent with other recent studies, non-musicians showed facilitation in N1 latency. Notably, musicians showed no significant difference in N1 latency between the AV-VO difference wave and the AO condition. These findings imply that AV facilitation is affected by participants' musical background. The expected finding, also observed here for non-musicians, is greater facilitation for the AV-VO than the AO condition. However, this was not found for musicians, whose N1 latencies were already short relative to those of non-musicians and were not further reduced in the AV-VO difference wave. Whereas non-musicians, who have relatively late unimodal N1 latencies, show AV facilitation, the unimodal auditory facilitation that comes with a musical background is not further enhanced for audiovisual speech, suggesting that the extent to which N1 latency for auditory perception can be facilitated may have limits.
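
As a worked illustration of the AV-VO comparison described above, the sketch below (a minimal Python example, not the authors' analysis pipeline) computes a difference wave and an N1 peak latency from hypothetical averaged ERP arrays; the array names, sampling rate, and N1 search window are assumptions introduced here for illustration only.

import numpy as np

# Hypothetical averaged ERP waveforms at one electrode (microvolts),
# sampled at 500 Hz from -100 ms to +500 ms around sound onset.
sfreq = 500.0
times = np.arange(-0.1, 0.5, 1.0 / sfreq)

def n1_latency(erp, times, window=(0.07, 0.15)):
    # N1 is a negative deflection; take the most negative sample
    # within an assumed 70-150 ms search window.
    mask = (times >= window[0]) & (times <= window[1])
    return times[mask][np.argmin(erp[mask])]

# Placeholder waveforms standing in for per-participant averages.
erp_ao = np.random.randn(times.size)   # audio-only
erp_av = np.random.randn(times.size)   # audiovisual
erp_vo = np.random.randn(times.size)   # video-only

# Subtract the video-only response from the audiovisual response to
# remove the purely visual contribution, then compare N1 latencies.
difference_wave = erp_av - erp_vo
facilitation = n1_latency(erp_ao, times) - n1_latency(difference_wave, times)
# A positive value means the N1 peaks earlier in AV-VO than in AO,
# i.e., audiovisual facilitation of the auditory N1.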

Topic Area: Perception: Speech Perception and Audiovisual Integration
