Poster D85, Wednesday, August 21, 2019, 5:15 – 7:00 pm, Restaurant Hall

Similarities and differences in the cortical processing of melodies in speech and music

Mathias Scharinger1,2, Valentin Wagner2, Christine A. Knoop2, Daniela van Hinsberg2, Winfried Menninghaus2;1Philipps University Marburg, 2Max Planck Institute for Empirical Aesthetics

Recent work in the neurosciences suggests that music and speech share neural bases for certain levels of processing. Speech prosody, that is, stress and intonation patterns, has been found to recruit right-temporal processing areas in close vicinity to primary auditory regions. Deepening our previous study on speech melody in poems, we here present a direct comparison of spoken poems and their musical settings from a neurobiological perspective. We hypothesize that the recurrence structure of syllable pitches and durations, as determined by autocorrelation analyses, will modulate brain activity in areas dedicated to musical processing. We are furthermore interested in brain regions that support the aesthetic evaluation of speech and music. For this purpose, 42 participants listened to randomly presented spoken poems and their sung musical settings while they were lying in a 3-T magnetic resonance tomography (MRT) scanner. Importantly, in order to exclude voice-specific effects, the poems and their musical settings were recorded by the same professional speaker and singer. During the echo-planar imaging (EPI) sequence, participants provided continuous liking ratings via MRT-compatible pressure sensors with their left and right index fingers. Our functional magnetic resonance imaging (fMRI) analyses first compared overall activations of speech (poems) vs. music (musical settings) and included parametric modulations of the blood oxygenation level dependent (BOLD) response by continuous liking ratings and by autocorrelations of syllable pitches and durations derived from both speech and music recordings. Our results show that poems (compared to musical settings) elicited activity in posterior parts of the middle temporal gyrus, extending into the superior temporal sulcus in the left hemisphere, whereas musical settings elicited activity in the middle part of the superior temporal gyrus, involving Heschl’s gyrus, in the right hemisphere.
Second, continuous liking ratings modulated brain activity in posterior parts of the right middle temporal gyrus for poems and in the right supramarginal gyrus for musical settings. Finally, autocorrelations of syllable pitch and duration for poems correlated with brain activity neighboring on and overlapping with regions of musical processing in middle parts of the right superior temporal gyrus, while autocorrelations of syllable duration for musical settings covaried with activation in the left superior temporal gyrus, in the vicinity of speech-dedicated processing areas. These results imply several similarities and differences in the processing of speech and music: First, speech melody, as captured by the recurrence of pitch contours, recruits areas that support music perception, while rhythmic aspects of music, as approximated by duration recurrences, recruit areas that support speech perception. These findings are in line with hemispheric specializations for acoustic information on fast (left) and slower (right) time scales. Differences between speech and music were seen in the regions modulated by aesthetic evaluations: For speech, these regions were in the vicinity of those modulated by pitch structure, whereas for music, they were further apart and involved regions commonly associated with pitch memory.
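The recurrence measure used as a parametric modulator above can be illustrated with a minimal sketch. This is not the authors' analysis pipeline; it is a plain normalized autocorrelation over a toy sequence of syllable pitch values, with all names and values chosen for illustration only.

```python
# Hypothetical sketch of a recurrence analysis of syllable pitches:
# compute the normalized autocorrelation of a pitch sequence at each lag,
# so that a peak at lag k indicates a recurring k-syllable pitch contour.
# The pitch values below are invented for illustration.

def autocorrelation(values, lag):
    """Normalized autocorrelation of a sequence at a given lag (range [-1, 1])."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values)
    if var == 0 or lag >= n:
        return 0.0
    cov = sum((values[i] - mean) * (values[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

# Toy example: syllable pitches (Hz) with a repeating four-syllable contour.
pitches = [220, 262, 247, 196] * 4
recurrence = [autocorrelation(pitches, lag) for lag in range(1, 8)]
# The autocorrelation peaks at lag 4, reflecting the recurring contour.
```

Such lag-wise recurrence values (for pitch or for syllable duration) could then serve as per-stimulus regressors of the BOLD response, analogous to the parametric modulations described in the abstract.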

Themes: Speech Perception, Prosody
Method: Functional Imaging
