
Exploring Neural Dynamics of Speech and Melody Processing in the Human Brain

Poster C86 in Poster Session C, Wednesday, October 25, 10:15 am - 12:00 pm CEST, Espace Vieux-Port

Akanksha Gupta1, Agnès Trébuchon1,2, Benjamin Morillon1; 1INS, INSERM, Aix-Marseille University, 2APHM, Hôpital de la Timone, Service de Neurophysiologie Clinique, Marseille, France

The processing of speech and music in the human brain is asymmetric: auditory temporal modulations are predominantly processed in the left hemisphere and auditory spectral modulations in the right hemisphere. However, the precise neural dynamics underlying this lateralization and the encoding of the acoustic features supporting speech and music processing remain largely unexplored. To investigate this, we recorded intracranial EEG from fourteen patients with epilepsy implanted with electrodes in auditory regions. Our stimulus set contained one hundred a cappella songs, combining ten French sentences with ten melodies. Each song had a temporally degraded, a spectrally degraded, and an undegraded (original) version, yielding a set of three hundred stimuli. To relate behavioral responses to neural activity, participants first completed a binary choice task in which they were presented with pairs of excerpts and judged whether they were identical or different. Participants then underwent a passive listening phase while watching a silent documentary. Using multivariate pattern analysis and time-frequency analysis, we trained a classifier to differentiate between sentences and melodies and examined the encoding of temporal and spectral modulations. The behavioral results showed reduced sentence recognition in temporally degraded conditions and reduced melody recognition in spectrally degraded conditions. Consistent with these behavioral outcomes, the decoding accuracies revealed that speech processing relies primarily on temporal modulations, whereas melody processing depends predominantly on spectral modulations. Notably, these decoding patterns were observed consistently across time and channels, suggesting a spatiotemporal code in the auditory system. Furthermore, our findings indicate that distinct frequency bands contribute to encoding the temporal and spectral cues involved in speech and melody processing. In future work, we aim to extend these findings by directly examining the roles of different oscillatory networks in speech and melody processing. Moreover, by exploring multiple dimensions (time, channels, and frequencies), we strive for a comprehensive account of how the human brain processes speech and melody.
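As a concrete illustration of the time-resolved decoding approach described above, the sketch below shows a minimal multivariate pattern analysis in Python with scikit-learn. It is not the authors' analysis code: the data, dimensions, and labels are hypothetical stand-ins for the actual iEEG epochs, and the classifier choice is an assumption.

```python
# Minimal sketch of time-resolved MVPA decoding (sentence vs. melody),
# assuming iEEG epochs as a NumPy array of shape (n_trials, n_channels, n_times).
# All data and dimensions here are hypothetical placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 150            # hypothetical dimensions
X = rng.standard_normal((n_trials, n_channels, n_times))  # stand-in for iEEG epochs
y = rng.integers(0, 2, n_trials)                          # 0 = melody, 1 = sentence

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Fit and evaluate an independent classifier at each time point (5-fold CV),
# producing a decoding-accuracy time course across the trial.
scores = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5, scoring="accuracy").mean()
    for t in range(n_times)
])
print(f"peak decoding accuracy: {scores.max():.2f}")
```

Training one classifier per time point in this way yields a decoding time course; testing how the learned patterns generalize across time points and across channels is one way to probe the spatiotemporal code referred to in the abstract.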

Topic Areas: Speech Perception
