
Poster D17, Thursday, November 9, 6:15 – 7:30 pm, Harborview and Loch Raven Ballrooms

Rhythm sensitivity assists in overcoming acoustic and syntactic challenges during speech listening

Sanghoon Ahn1, Ian Goldthwaite1, Kate Corbeil1, Allison Bryer1, Korrin Perry1, Aiesha PolaKampalli1, Katherine Miller1, Rachael Holt1, Yune Lee1,2; 1The Ohio State University, 2Center for Brain Injury, The Ohio State University

A growing body of evidence has indicated connections between speech, language, and music. In particular, rhythm processing has been implicated as important in studies measuring various aspects of speech and language proficiency (e.g., reading, speaking, and listening). Here, we investigated how rhythm sensitivity influences spoken sentence recognition under both sensory (e.g., impoverished acoustic quality) and cognitive (e.g., complex syntactic structure) challenges. Seventy-eight children (age range: 7-17 years; mean age: 11.4; 39 females) were recruited through The Ohio State University’s Language Pod located at the Center of Science and Industry. All children were native English speakers with normally developed speech, hearing, and language abilities, per parent report. Children were administered two tests, each taking approximately 10 minutes. First, in the speech/language test, children listened to short spoken sentences that varied simultaneously in acoustic quality (clear vs. 15-channel vocoded speech) and linguistic structure (subject- or object-relative embedded clause). For each sentence, children indicated the gender of the agent performing the action via button press. For example, children were instructed to press the “male” button for the sentence “Boys that kiss girls are happy.” Second, in the music test, children were presented with pairs of short rhythm sequences consisting of either 6 or 7 intervals and had to determine whether the two sequences were the same or different. Half of the pairs contained the same rhythmic pattern and the other half contained different patterns; the number of intervals was matched within each pair. To identify which factors accounted for performance in the speech/language test, we ran a linear mixed-effects (LME) regression analysis in which the fixed effects were children’s rhythm test score, age, gender, duration of music training, language environment (e.g., bilingual), parents’ education, syntax (subject/object-relative), and acoustic condition (clear/15-channel), and the random effect was subject. The LME revealed that rhythm test score (p = 0.002103), age (p = 3.662e-06), syntax (p = 2.2e-16), and acoustic condition (p = 0.000726) significantly predicted speech/language performance. Together, these results indicate that rhythm sensitivity helps children better cope with sensory and cognitive challenges, whether present simultaneously or independently in spoken sentences. By controlling for potentially confounding variables (e.g., music training background, age, language environment), we showed that better performance on the speech/language test was independently driven by rhythm sensitivity. The present behavioral data lay the groundwork for examining genetic and neural connections between speech, language, and music processing.
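For readers unfamiliar with this type of analysis, the following is a minimal illustrative sketch, not the authors' code, of how a linear mixed-effects model of the kind described above could be specified in Python with statsmodels. The data file and all column names (accuracy, rhythm_score, subject, etc.) are hypothetical placeholders for the predictors listed in the abstract.

# Illustrative sketch of an LME regression like the one described in the
# abstract. Assumes one row per child per sentence trial; all column names
# and the file name are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("speech_language_trials.csv")  # hypothetical trial-level data

# Fixed effects: rhythm test score plus covariates and the two
# within-subject manipulations (syntax, acoustic condition).
# Random effect: a random intercept per subject (groups=...).
model = smf.mixedlm(
    "accuracy ~ rhythm_score + age + gender + music_training"
    " + language_environment + parent_education + syntax + acoustic",
    data=df,
    groups=df["subject"],
)
result = model.fit()
print(result.summary())  # coefficients and p-values for each fixed effect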

Topic Area: Language Development
