
Poster C61, Wednesday, August 21, 2019, 10:45 am – 12:30 pm, Restaurant Hall

A comprehensive evaluation of the Spontaneous Speech Synchronization phenomenon

Arianna Zuanazzi1, M. Florencia Assaneo1, Pablo Ripolles1, Joan Orpella1, David Poeppel1,2; 1Department of Psychology, New York University; 2Neuroscience Department, Max Planck Institute for Empirical Aesthetics

Spontaneous synchronization of a motor output to an auditory input (i.e., without explicit training) is a basic trait present in humans from birth and has important cognitive implications. For instance, infants’ proficiency in following a beat is a predictor of language skills. From a phylogenetic perspective, spontaneous audio-motor synchronization is argued to be a unique characteristic of vocal learning species (e.g., parrots), including humans. Audio-motor synchrony in the context of speech perception/production remains largely unexplored. Here we evaluate the extent to which speech motor output synchronizes to speech auditory input. In a previous study, we designed a simple new behavioral task (Spontaneous Speech Synchronization Test, SSS-test) in which participants listened to a rhythmic train of syllables while concurrently whispering the syllable ‘tah’. Using this task, we found that some listeners are compelled to spontaneously align their own speech output to the input (high synchronizers), whereas others remain impervious to the external rhythm (low synchronizers). In this study, we assess whether the ability of the SSS-test to segregate the population into two different groups (i.e., a bimodal distribution) depends on the specific set of parameters previously employed (specifically, a fixed 4.5 Hz syllable rate and implicit task instructions). We tested two variations of the SSS-test: in experiment 1, the syllable rate was fixed at 4.5 Hz but participants were explicitly instructed to synchronize to the external rhythm (i.e., fixed 4.5 Hz rate, explicit instructions); in experiment 2, task instructions were the same as in experiment 1 but the syllable rate increased from 4.3 to 4.7 Hz during the task (i.e., accelerated rate, explicit instructions). The results of experiment 2 replicated the findings of the previous study, showing a bimodal (high versus low synchronizers) outcome. Surprisingly, this result was not replicated in the fixed-rate, explicit condition (experiment 1). We suggest that these results, together with previous behavioral and neural findings, can be parsimoniously explained by a simple neural model that assumes that the motor cortex behaves as a phase oscillator receiving an input signal from auditory areas. This study illustrates the suitability of the SSS-test as an experimental tool for a deeper understanding of speech-to-speech synchronization. Furthermore, a parsimonious neural model accounts for the phenomenon.
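The phase-oscillator account lends itself to a compact simulation. The sketch below is an illustrative reconstruction, not the authors' published implementation: it assumes a Kuramoto-style forced phase oscillator in which a hypothetical coupling parameter k (standing in for auditory-to-motor drive) determines whether the produced rhythm locks to the syllable train, and it uses the phase-locking value (PLV) as a generic synchrony measure. The function names and parameter values are assumptions for illustration only.

```python
# Minimal sketch (assumed, not the published model): motor cortex as a
# phase oscillator driven by an auditory rhythm. When the coupling k is
# weaker than the frequency detuning, the motor phase drifts relative to
# the stimulus (a "low synchronizer"); when k is strong enough, the two
# phases lock (a "high synchronizer").
import numpy as np

def simulate_phase_oscillator(f0=4.3, f_stim=4.5, k=1.0,
                              duration=60.0, fs=1000.0):
    """Euler-integrate d(theta)/dt = 2*pi*f0 + k*sin(phi_stim - theta)."""
    n = int(duration * fs)
    dt = 1.0 / fs
    t = np.arange(n) * dt
    phi_stim = 2 * np.pi * f_stim * t        # phase of the syllable train
    theta = np.zeros(n)                      # motor (whispering) phase
    for i in range(1, n):
        dtheta = 2 * np.pi * f0 + k * np.sin(phi_stim[i - 1] - theta[i - 1])
        theta[i] = theta[i - 1] + dtheta * dt
    return theta, phi_stim

def phase_locking_value(theta, phi):
    """PLV between produced and perceived rhythms; 1 = perfect locking."""
    return np.abs(np.mean(np.exp(1j * (theta - phi))))

# With a 0.2 Hz detuning, the critical coupling is 2*pi*0.2 ~ 1.26 rad/s:
# k = 0.5 drifts (low PLV), k = 5.0 locks (PLV near 1).
for k in (0.5, 5.0):
    theta, phi = simulate_phase_oscillator(f0=4.3, f_stim=4.5, k=k)
    print(f"k={k}: PLV = {phase_locking_value(theta, phi):.2f}")
```

Under this toy model, a bimodal distribution of synchronizers would follow naturally if individual coupling strengths cluster on either side of the critical value set by the detuning between the oscillator's natural rate and the stimulus rate.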

Themes: Speech Motor Control, Multisensory or Sensorimotor Integration
Method: Behavioral
