What does the spontaneous speech synchronization task measure?

Poster D106 in Poster Session D, Wednesday, October 25, 4:45 - 6:30 pm CEST, Espace Vieux-Port

J. Devin McAuley¹, Bailey Rann¹, Frank Dolecki¹, Toni Smith¹, Soo-Eun Chang², Emily Garnett²; ¹Michigan State University, ²University of Michigan

Assaneo and colleagues have developed a spontaneous speech synchronization (SSS) task in which participants whisper the syllable ‘tah’ while monitoring a series of synthesized syllables (Assaneo et al., 2019). On this task, some participants appear to spontaneously synchronize their produced syllables with the timing of the heard syllables, while others produce syllable timings that are unaffected by the heard rhythm, resulting in a bimodal distribution of phase-locking values (PLVs) between the amplitude envelopes of the produced and perceived syllable sequences. Two central claims about the SSS task are that it (1) distinguishes individuals who, without conscious awareness, spontaneously synchronize their speech with a to-be-attended syllable stream from those who do not, and (2) reflects a robust individual difference that predicts behavioral and neuroanatomical characteristics of speech and language processing. With respect to Claim 1, although the SSS task is framed as a syllable perception task in which participants are not instructed to synchronize produced with heard speech, it is still possible that speech synchronization is conscious rather than unconscious. To test this possibility, forty-six participants, 18–43 years of age, completed the SSS task twice during separate visits, followed by a modified version of the Perceived Awareness of the Research Hypothesis (PARH) survey. The two key questions were a ‘yes/no’ response to the statement ‘I tried to synchronize’ and a rating of the statement ‘I tried to say tah in time with the sounds’ on a 7-point scale (1 = strongly disagree, 7 = strongly agree). Consistent with Assaneo and colleagues, there was a bimodal distribution of PLVs distinguishing ‘high’ synchronizers from ‘low’ synchronizers, and PLVs showed high test-retest reliability, r(44) = 0.78, p < 0.001. Contrary to the claim that the SSS task measures unconscious synchronization, however, 60.9% of participants indicated that they tried to synchronize their produced speech with heard speech, and PLVs were substantially greater for participants who reported trying to synchronize (M = 0.54, SD = 0.07) than for those who did not (M = 0.28, SD = 0.06), p = 0.015. Moreover, when we used binary logistic regression to classify participants as ‘high’ or ‘low’ synchronizers from the two key questions alone, overall classification accuracy was 76.1% (corresponding to a d′ of 1.42). In sum, the distinction between ‘high’ and ‘low’ synchronizers on the SSS task does not appear to emerge spontaneously, without any listener intent to synchronize; rather, the bimodal distribution of PLVs reflects whether participants consciously try to synchronize their produced speech with heard speech.
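
To make the dependent measure concrete, here is a minimal Python sketch of one standard way to compute a phase-locking value between a produced and a perceived amplitude envelope: band-pass both envelopes around the syllable rate, extract instantaneous phase with the Hilbert transform, and take the length of the mean phase-difference vector. The 3.5–5.5 Hz band (centered near the roughly 4.5 syllables/s rate of the original task), the filter order, and the function name are illustrative assumptions, not details reported in the abstract.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(produced_env, perceived_env, fs, band=(3.5, 5.5)):
    """Phase-locking value between two amplitude envelopes (a sketch)."""
    # Band-pass both envelopes around the syllable rate; the band edges
    # are an assumption, not a parameter reported in the abstract.
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phases = []
    for env in (produced_env, perceived_env):
        filtered = filtfilt(b, a, env - np.mean(env))
        phases.append(np.angle(hilbert(filtered)))  # instantaneous phase
    # PLV = length of the mean phase-difference vector: 1 = perfectly
    # locked, 0 = phase lags uniformly distributed.
    return np.abs(np.mean(np.exp(1j * (phases[0] - phases[1]))))

# Hypothetical usage with synthetic envelopes sampled at 100 Hz:
fs = 100
t = np.arange(0, 10, 1 / fs)
heard = 1 + np.cos(2 * np.pi * 4.5 * t)            # 4.5 Hz syllable rhythm
produced = 1 + np.cos(2 * np.pi * 4.5 * t + 0.4)   # locked, constant lag
print(plv(produced, heard, fs))                    # near 1 for locked signals
```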
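
The split of a bimodal PLV distribution into ‘high’ and ‘low’ synchronizers can be operationalized in several ways; one common choice (an illustration, not necessarily the authors’ procedure) is to fit a two-component Gaussian mixture to the PLVs and label each participant by the more likely component:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for the real PLVs, only to make the sketch runnable;
# the mode parameters echo the group means reported in the abstract.
rng = np.random.default_rng(0)
plvs = np.concatenate([rng.normal(0.28, 0.06, 18),
                       rng.normal(0.54, 0.07, 28)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(plvs)
high_component = int(np.argmax(gmm.means_.ravel()))       # larger-mean mode
is_high_synchronizer = gmm.predict(plvs) == high_component
```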
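
The final classification analysis (predicting ‘high’ vs. ‘low’ synchronizer status from the two survey questions; 76.1% accuracy, d′ = 1.42) can be sketched as below. The feature coding and the synthetic data are assumptions for illustration; d′ is computed in the usual way as z(hit rate) − z(false-alarm rate).

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression

def dprime(y_true, y_pred):
    """d' for a binary classifier: z(hit rate) - z(false-alarm rate)."""
    hit_rate = np.mean(y_pred[y_true == 1])  # P(classified high | truly high)
    fa_rate = np.mean(y_pred[y_true == 0])   # P(classified high | truly low)
    # In practice, rates of exactly 0 or 1 need a standard correction
    # before applying the inverse-normal transform.
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Synthetic stand-ins: column 0 = 'I tried to synchronize' (0/1),
# column 1 = the 1-7 agreement rating; y = PLV-based high/low split.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=46)
X = np.column_stack([np.clip(y + rng.integers(-1, 2, 46), 0, 1),
                     np.clip(4 + 2 * y + rng.integers(-2, 3, 46), 1, 7)])

model = LogisticRegression().fit(X, y)
accuracy = model.score(X, y)          # overall proportion correctly classified
d_prime = dprime(y, model.predict(X))
```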

Topic Areas: Methods, Multisensory or Sensorimotor Integration
