Slide Slam F8
Neural Decoding of Concurrent Speech: Lessons from Selective and Divided Attention
Maya Kaufman1, Keren Guezentsvey1, Elana Zion Golumbic1; 1The Multidisciplinary Brain Research Center, Bar-Ilan University
It is widely assumed that when individuals apply selective attention to a single speaker in a cocktail party setting, other so-called “task-irrelevant” speakers are ignored. However, the extent to which task-irrelevant speech is processed, and the nature of its neural representation, remain highly debated. This ambiguity deepens when considering how attention might operate under naturalistic conditions, where it may not always be beneficial to fully tune out task-irrelevant speech. Some have suggested that natural listening in multi-speaker contexts involves dynamic attention-switching between relevant and irrelevant speakers, rather than exclusive attention to the relevant speaker. However, this hypothesis is extremely difficult to test experimentally, given the limited empirical access to the listener's internal state in selective attention paradigms. To circumvent this methodological challenge, here we explicitly manipulated the task-relevance of concurrently presented speech and examined how this affects its neural encoding. Specifically, we compared neural speech-tracking of two natural speech streams when only one was task-relevant (Selective Attention) vs. when both were task-relevant (Divided Attention). We recorded the magnetoencephalographic (MEG) response from 27 native Hebrew speakers while they listened to two concurrent speakers telling short personal narratives (dichotic presentation). Before each trial, participants were instructed to attend either to one speaker (Selective Attention condition) or to both speakers (Divided Attention condition). After each trial, participants answered comprehension questions about either the pre-designated attended narrative (Selective) or both narratives (Divided). We then performed a speech-tracking analysis, allowing us to compare the neural representation of each speaker's acoustic envelope.
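Speech-tracking analyses of this kind are commonly implemented by regressing the neural signal onto lagged copies of the speech envelope, yielding a temporal response function (TRF) whose prediction accuracy indexes how strongly each speaker is tracked. Below is a minimal, self-contained sketch of that general approach using plain numpy/scipy; the function names, the ridge formulation, and all parameter values (smoothing cutoff, number of lags, regularization) are illustrative assumptions, not the authors' actual MEG pipeline.

```python
import numpy as np
from scipy.signal import hilbert

def speech_envelope(audio, fs, cutoff=8.0):
    """Broadband amplitude envelope via the Hilbert transform,
    smoothed with a simple moving average (illustrative choice)."""
    env = np.abs(hilbert(audio))
    win = max(1, int(fs / cutoff))
    return np.convolve(env, np.ones(win) / win, mode="same")

def fit_trf(stimulus, response, n_lags=40, alpha=1.0):
    """Estimate a temporal response function mapping the stimulus
    envelope to the neural response via lagged ridge regression."""
    n = len(stimulus)
    # design matrix: each column is the stimulus delayed by one lag
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]
    # ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ response)
    return w, X @ w

# toy check: recover a known kernel from simulated data
rng = np.random.default_rng(0)
fs = 100
stim = speech_envelope(rng.standard_normal(fs * 10), fs)
true_kernel = np.exp(-np.arange(40) / 8.0)
resp = np.convolve(stim, true_kernel, mode="full")[:len(stim)]
resp += 0.1 * rng.standard_normal(len(stim))
w, pred = fit_trf(stim, resp)
# correlation between predicted and measured response
# serves as the "speech-tracking" score for this speaker
r = np.corrcoef(pred, resp)[0, 1]
```

In a two-speaker design such as the one above, a TRF would be fit per speaker and per condition, and the resulting prediction accuracies compared between the task-relevant and task-irrelevant streams.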
Subjects performed significantly better on the Selective Attention task (~89% accuracy) than on the Divided Attention task (~72%), indicating a behavioral cost to dividing attention. Speech-tracking analysis of the neural response to the two speakers replicated previous Selective Attention results, with reduced responses to the task-irrelevant speaker relative to the task-relevant speaker in auditory regions. Interestingly, the neural representation of the two speakers in the Divided Attention condition was comparable to that of the task-irrelevant speaker in the Selective Attention condition. This pattern suggests that the partial encoding of task-irrelevant speech is akin to the encoding applied when attempting to divide attention between two speakers. We further characterize the neural results in terms of the degree of selectivity between the speakers and the level at which each speaker is represented under these two attentional tasks. By contrasting these two opposing attention regimes, our results broaden the ongoing conversation about the system's capacity and limitations for processing two speech inputs, and about the dynamics of attention in ecologically valid settings.