Presentation


When music is language: electroencephalography of a musical speech surrogate

Poster E68 in Poster Session E, Saturday, October 8, 3:15 - 5:00 pm EDT, Millennium Hall

Samantha Wray¹, Laura McPherson¹, Joseph Fausey¹, Mamadou Diabate¹, Yong Hoon Chung¹, Kevin Ortego¹, Viola Störmer¹; ¹Dartmouth College

A speech surrogate encodes language into a non-speech modality for transmission. The most widely used globally and cross-linguistically is writing, but others exist as well, including musical surrogates that use instruments such as flutes, “talking drums”, and many others (see Winter and McPherson (2022) for an overview). Experts in a speech surrogacy system can encode and decode messages using this unconventional modality. Despite the relatively well-understood relationship between literacy and language-processing areas of cortex, the neural correlates of processing other speech surrogates remain almost entirely unstudied due to the rarity of their practice. One exception is the fMRI study of Carreiras et al. (2005), which found that fluent speakers of Silbo Gomero, a non-musical whistled speech surrogate from the Canary Islands that uses no instrument, recruited left-hemisphere temporal lobe regions associated with language processing while listening to clips of whistled speech. The current study aims to initiate a research program investigating the processing of a musical surrogate of the Seenku language of Burkina Faso. Seenku can be encoded and played on a balafon: a West African xylophone constructed of resonator gourds beneath wooden keys struck by a mallet. This pilot experiment focused on a single subject who is an expert player of the balafon for speech surrogacy (N=1, male, aged 49, right-handed). Four types of audio stimuli were included: (1) Seenku speech, (2) balafon speech surrogacy, (3) balafon “singing” (playing a melody that has lyrics), and (4) balafon playing devoid of any linguistic content. The Seenku speech condition was the most acoustically distinct to the naïve ear, as it contained human speech, whereas the other three conditions contained sounds produced by the balafon. During EEG recording, the participant was positioned in front of a blank screen while auditory stimuli were presented in random order with a jittered interstimulus interval to prevent neural entrainment to the audio signal. The experiment lasted approximately 20 minutes. Data were recorded on a 32-channel Brain Products system. Eye blinks and other identifiable motor artifacts were removed manually. Epochs were selected from 200 ms pre-stimulus to 800 ms post-stimulus, and data were baseline-corrected relative to the −200 ms to 0 ms pre-stimulus interval. Preliminary results comparing event-related potentials (ERPs) associated with each stimulus type revealed that balafon music devoid of any linguistic content elicited a strong frontal positivity, whereas the other three conditions elicited a strong frontal negativity. Although the Seenku speech condition was the only one produced by a human speaker, these results suggest that, for an expert, balafon speech is processed like human speech and not like balafon music. This effect held across multiple analyzed time windows: 100-200 ms, 200-300 ms, and 300-400 ms after stimulus onset. Additionally, single-trial permutation analyses (threshold = 0.01, FDR-corrected) at electrode Fz underscored this frontal effect and showed that between-condition differences were significant in the 200-400 ms time window. Future work is planned to recruit additional expert balafon players as well as two other participant groups: expert musicians (without a speech surrogacy component) and participants with no musical experience whatsoever.
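
For readers who want a concrete picture of the analysis pipeline, the sketch below illustrates the kind of epoching, baseline correction, condition-wise ERP averaging, and single-trial permutation testing at Fz described above, using MNE-Python. This is a minimal illustration, not the authors' code: the file name, annotation labels, event codes, and permutation count are assumptions for the sake of the example.

```python
# Minimal sketch (not the authors' pipeline) of the EEG analysis described in the abstract.
# Assumptions: a BrainVision recording named "balafon_pilot.vhdr" with annotations named
# after the four conditions, 5000 permutations, and a comparison of two conditions at Fz.
import numpy as np
import mne
from mne.stats import fdr_correction

# Load the 32-channel recording (hypothetical file name)
raw = mne.io.read_raw_brainvision("balafon_pilot.vhdr", preload=True)

# Hypothetical condition labels and event codes
event_id = {"seenku_speech": 1, "balafon_surrogate": 2,
            "balafon_singing": 3, "balafon_music": 4}
events, _ = mne.events_from_annotations(raw, event_id=event_id)

# Epoch from -200 ms to 800 ms; baseline-correct on the -200 to 0 ms window
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.8, baseline=(-0.2, 0.0), preload=True)

# Condition-wise ERPs (averages over trials)
evokeds = {cond: epochs[cond].average() for cond in event_id}

def permutation_pvals(x, y, n_perm=5000, seed=0):
    """Two-sample permutation test on the difference of means, per time point."""
    rng = np.random.default_rng(seed)
    observed = x.mean(axis=0) - y.mean(axis=0)
    pooled = np.concatenate([x, y], axis=0)
    count = np.zeros_like(observed)
    for _ in range(n_perm):
        perm = rng.permutation(pooled.shape[0])
        px, py = pooled[perm[:len(x)]], pooled[perm[len(x):]]
        count += np.abs(px.mean(axis=0) - py.mean(axis=0)) >= np.abs(observed)
    return (count + 1) / (n_perm + 1)

# Single-trial data at Fz in the 200-400 ms window, for two of the conditions
fz = epochs.ch_names.index("Fz")
tmask = (epochs.times >= 0.2) & (epochs.times <= 0.4)
surrogate = epochs["balafon_surrogate"].get_data()[:, fz, :][:, tmask]
music = epochs["balafon_music"].get_data()[:, fz, :][:, tmask]

# FDR-corrected significance at threshold 0.01, per time point
pvals = permutation_pvals(surrogate, music)
reject, pvals_fdr = fdr_correction(pvals, alpha=0.01)
print(f"Significant time points (FDR, alpha=0.01): {reject.sum()} / {reject.size}")
```

The same comparison could be repeated for the remaining condition pairs and for the earlier 100-200 ms and 200-300 ms windows reported above.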

Topic Areas: Perception: Auditory, Speech Perception