Slide Slam O6

Neural decoding reveals representations of perceptual category and perceptual ambiguity during speech perception

Slide Slam Session O, Thursday, October 7, 2021, 2:30 - 4:30 pm PDT

Sara D. Beach1,2, Ola Ozernov-Palchik2, Sidney C. May2, Tracy M. Centanni2, John D. E. Gabrieli2, Dimitrios Pantazis2; 1Harvard University, 2Massachusetts Institute of Technology

Robust and efficient speech perception relies on interpreting acoustically variable phoneme realizations, yet prior neuroimaging studies are inconclusive regarding the degree to which acoustic-phonetic detail persists over time as categorical representations arise. We hypothesized that this might depend on task demands, such that overt categorization, in comparison to passive listening, would attenuate the representation of within-category detail over time. We addressed this question by using time-resolved neural decoding to quantify the (dis)similarity of brain response patterns evoked by the same stimuli presented during two different tasks (Beach et al., 2021, Neurobiology of Language). We recorded magnetoencephalography (MEG) from 24 adults during exposure to 40 tokens each of 10 steps of an acoustic continuum ranging from /ba/ to /da/, presented in pseudorandom order. In the passive task, participants performed visual target detection to maintain arousal but were told they could ignore the sounds. In the active task, participants labeled each stimulus as either "ba" or "da" via counterbalanced and delayed button-press. We performed cross-validated classification of the MEG data using linear support vector machines. Classifiers were trained to distinguish the perceptual label as applied by the participant (binary) as well as stimulus identity (pairwise). Perception of "ba" vs. "da" was successfully decoded from the MEG data. Left-hemisphere data were sufficient for decoding the percept early in the trial, while right-hemisphere data were necessary but not sufficient for decoding at later time points. Stimulus representations were maintained longer in the active task than in the passive task, perhaps due to decision-related processing. However, contrary to predictions, we did not observe a loss of within-category detail when an overt categorical response was required. Instead, in both tasks, a representation of perceptual ambiguity (that distinguished endpoint from middle tokens) dominated the second half of the trial. These results suggest that the speech-sound categorization process does not require the loss of within-category detail. Results are discussed in the context of theories of perceptual decision-making and in relation to models of speech perception and spoken-word recognition that highlight the utility of within-category detail.
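
For readers unfamiliar with time-resolved decoding, the sketch below illustrates the general approach of training and testing a cross-validated linear support vector machine independently at each time point of the epoched response. It uses simulated data with assumed dimensions (400 trials, 102 sensors, 120 time points) and the scikit-learn library; it is only an illustration of the technique, not the authors' analysis pipeline, whose details are given in the cited Beach et al. (2021) paper.

# Minimal sketch of time-resolved decoding with cross-validated linear SVMs.
# Data are simulated; trial counts, sensor counts, and time points are
# illustrative assumptions, not the authors' recording parameters.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Epochs array: (trials, sensors, time points). The binary label y stands in
# for the percept reported by the participant ("ba" = 0, "da" = 1).
n_trials, n_sensors, n_times = 400, 102, 120
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, size=n_trials)

# Inject a weak label-dependent signal in a mid-trial window so the simulated
# decoding time course rises above chance there.
X[y == 1, :20, 40:80] += 0.15

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))

# Train and test an independent classifier at each time point, yielding a
# time course of cross-validated decoding accuracy.
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=cv, scoring="accuracy").mean()
    for t in range(n_times)
])

print("peak accuracy %.2f at time index %d" % (accuracy.max(), accuracy.argmax()))

In practice, MEG toolboxes such as MNE-Python offer wrappers for this per-time-point loop (e.g., SlidingEstimator), and pairwise stimulus-identity decoding proceeds in the same way, with one classifier trained for each pair of continuum steps.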
