Slide Slam P9 Sandbox Series

Establishing the link between speech-to-speech synchrony and general auditory-motor synchronization skills

Slide Slam Session P, Thursday, October 7, 2021, 2:30 - 4:30 pm PDT

Cecilia Mares, M. Florencia Assaneo; Instituto de Neurobiología, Universidad Nacional Autónoma de México (UNAM)

Auditory-motor synchronization (AM-synch) is the ability to temporally align a train of motor gestures to a rhythmic auditory stimulus. In humans, it is an innate skill that has been shown to predict performance on different language-related tasks; across species, it has only been observed in vocal learners. In light of this link between AM-synch and speaking abilities, Assaneo and colleagues explored the phenomenon in the context of speech. They designed a behavioral protocol, the Spontaneous Speech Synchronization Test (SSS-test), in which participants are instructed to continuously repeat the syllable “tah” while concurrently listening to a rhythmic train of syllables. This simple test showed that the general population can be segregated into two groups: some participants spontaneously align their produced syllabic rate to the perceived one (high synchronizers), while the rate of the others remains unmodulated (low synchronizers). Strikingly, individuals classified as ‘high’ or ‘low’ synchronizers show structural and functional brain differences, with important consequences for speech processing and language learning skills. This initial work invites the following questions: Where does the predictive power of the test come from? Is the bimodal distribution of the synchronization measurement a consequence of the speech motor gestures or of the acoustic properties of the stimulus? To answer these questions, in the present study we evaluate the level of AM-synch for different motor gesture-stimulus combinations. Both the motor gesture and the stimulus can be speech-related (whispering “tah” / a train of syllables) or speech-unrelated (clapping / a train of tones). Participants completed eight synchronization blocks, two for each motor gesture-stimulus combination. In each block, participants were instructed to continuously repeat the motor gesture (whisper or clap) at the same rate as the auditory stimulus (train of syllables or train of tones) until the end of the stimulus. All stimuli lasted 1 minute; the rate started at 4.3 Hz (i.e., 4.3 syllables or tones per second) and increased by 0.1 Hz every 10 seconds until it reached 4.7 Hz. Preliminary results show that, while the bimodal distribution is recovered for the clapping-to-syllables combination, it dissolves (most participants were able to synchronize) when the stimulus consisted of tones, regardless of the motor gesture. This result shows that the previously reported ‘high’ vs. ‘low’ synchronizer segregation is a consequence of the acoustic features of the stimulus and is independent of the nature of the motor response. Additionally, individuals classified as low synchronizers during a first assessment with the SSS-test show a significant increase in their ability to synchronize to the train of syllables when the SSS-test is completed after the clapping-to-tones block. This suggests that synchronization abilities can be temporarily enhanced by prior entrainment to a more efficient stimulus. Building upon these results, further work will be conducted to identify the precise acoustic characteristics that yield the previously observed bimodal outcome in the synchronization test, and to explore whether a temporary enhancement of synchronization abilities translates into better performance on language-related tasks.
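For concreteness, the stimulus timing described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' stimulus code; in particular, it assumes the final 4.7 Hz segment simply fills the remainder of the 1-minute train, a detail the abstract does not specify.

    import numpy as np

    def stimulus_onsets(rates=(4.3, 4.4, 4.5, 4.6, 4.7),
                        boundaries=(10.0, 20.0, 30.0, 40.0, 60.0)):
        # Syllable/tone onset times (s) for the 1-minute accelerating train:
        # 4.3 Hz at the start, +0.1 Hz every 10 s, up to 4.7 Hz.
        # Assumption: the 4.7 Hz plateau fills the remaining 20 s.
        onsets, t = [], 0.0
        for rate, seg_end in zip(rates, boundaries):
            period = 1.0 / rate  # inter-onset interval for this segment
            while t < seg_end:
                onsets.append(t)
                t += period
        return np.asarray(onsets)

    onsets = stimulus_onsets()
    print(f"{len(onsets)} events over {onsets[-1]:.1f} s")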
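The abstract does not spell out how synchronization is quantified; in the SSS-test literature it is commonly measured as the phase-locking value (PLV) between the produced and perceived amplitude envelopes around the syllabic rate. A minimal sketch under that assumption (the band edges and filter order here are illustrative, not the authors' exact settings):

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def plv(env_produced, env_stimulus, fs, band=(3.5, 5.5)):
        # Phase-locking value between two amplitude envelopes,
        # band-passed around the syllabic rate: 1 = perfect locking,
        # values near 0 = no consistent phase relation.
        b, a = butter(2, band, btype="bandpass", fs=fs)
        ph1 = np.angle(hilbert(filtfilt(b, a, env_produced)))
        ph2 = np.angle(hilbert(filtfilt(b, a, env_stimulus)))
        return np.abs(np.mean(np.exp(1j * (ph1 - ph2))))

    # Toy check: two 4.5 Hz envelopes with a constant lag lock almost perfectly.
    fs = 100.0
    t = np.arange(0, 60, 1 / fs)
    stim = 1 + np.cos(2 * np.pi * 4.5 * t)
    prod = 1 + np.cos(2 * np.pi * 4.5 * t + 0.3)
    print(plv(prod, stim, fs))  # ~1.0

Under a measure like this, a bimodal distribution of PLVs across participants would separate the ‘high’ from the ‘low’ synchronizers described in the abstract.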
