
Slide Slam C17 Sandbox Series

Divide and Conquer: Own-Name detection during ecological concurrent speech processing

Slide Slam Session C, Tuesday, October 5, 2021, 12:30 - 3:00 pm PDT

DANNA PINTO¹, MAYA KAUFMAN¹, ELANA ZION-GOLUMBIC¹; ¹Bar Ilan University

A well-known phenomenon is the experience of noticing one’s own name spoken by an otherwise unattended speaker. This “Cocktail Party Effect” has long fueled debates regarding the extent to which so-called “unattended” speech is processed. However, it is difficult to reliably gauge the prevalence of this effect, since most studies use selective-attention tasks in which detection of unattended words is assessed only indirectly. For example, in studies using subject-report measures, only ~33% of participants report noticing their name. This critically limits the ability to draw theoretical inferences regarding the depth of processing applied to concurrent speech. To circumvent this methodological limitation, here we address this question from a different perspective, employing a divided-attention task. We ask two specific questions: What is the capacity for detecting individual words in background speech while listeners are primarily engaged in processing a natural speech narrative? And does detecting one’s own name enjoy a unique status? Participants were presented with two concurrent speech streams: a Narrative stream, which was a natural recording of continuous speech, and a Starbucks stream, which simulated a barista calling out orders, including names and food items. Participants were instructed to listen to the Narrative and answer subsequent comprehension questions. In addition, they were required to monitor the Starbucks stream and detect a specific target name, which could be either their own name or a different control name. Neural activity was measured with EEG during the task, along with changes in skin-conductance levels (GSR). Accuracy in both tasks was high, with target-detection rates near ceiling (>90%). Moreover, there were no apparent behavioral tradeoffs between performance on the two tasks. This suggests that individuals have a relatively good capacity for monitoring background speech and detecting specific words without compromising comprehension of a main narrative. When detection rates for the two targets were compared, performance was significantly better for one’s own name than for a control name. This effect was accompanied by increased neural responses (in a time window corresponding to the P600 ERP component) as well as heightened GSR responses to one’s own name. These results provide new perspectives on the capacity for processing concurrent speech and indicate that it is possible to monitor background speech for specific information while also fully processing continuous speech. This is in line with proposed ‘late-selection’ models and invites fine-tuning of theoretical accounts of ‘bottlenecks’ for processing concurrent speech. In addition, the clear advantage for detecting one’s own name is in line with previous findings indicating an advantage for processing personally meaningful information in noisy, multi-speaker contexts.
