Slide Slam


Slide Slam R14

Multivariate decoding of neural representations for verbal working memory

Slide Slam Session R, Friday, October 8, 2021, 12:00 - 2:30 pm PDT

Bradley Buchsbaum1,2, Jessica McQuiggan1,2; 1Rotman Research Institute, 2University of Toronto

The nature of the representational code underlying verbal working memory rehearsal has long been debated. Arguments for the primary importance of “acoustic”, “articulatory”, or “phonological” codes for verbal working memory have variously been advanced (e.g., Wickelgren, 1965; Hintzman, 1967; Baddeley, 1992). With the advent of cognitive neuroscience and neuroimaging, the brain areas supporting verbal working memory were identified, often with the tacit assumption that neural activity observed during memory rehearsal was, somewhat generically, “phonological”. In the current study we examined both the “where” and the “what” of verbal working memory representations using a multivariate decoding approach with functional magnetic resonance imaging (fMRI).

Eighteen participants performed three tasks in separate scanning sessions. In one of the first two sessions, participants performed a passive auditory listening task in which they were repeatedly presented with 12 consonant-vowel (CV) syllables formed by crossing the consonants /b/, /d/, /p/, and /t/ with the vowels /a/, /i/, and /u/. In the other of the first two sessions, participants performed an articulation task: when presented with the written form of one of these syllables (e.g., “ba”), they silently mouthed the cued syllable four consecutive times in synchrony with a flashing dot. These two fMRI datasets were then used to train a series of multivariate pattern classifiers to discriminate among the 12 syllables. In the third session, participants performed a simple verbal working memory task in which two syllables were presented auditorily in succession (500 ms ISI), one to each ear. After a 1000 ms delay, they received a retro-cue in the form of a circle appearing on either the left or right side of the screen: if the circle appeared on the left, the task was to rehearse the syllable just presented to the left ear, and vice versa for the right. After a 10 s delay, participants were prompted to overtly recall the cued syllable, and their responses were recorded using an optical microphone installed in the scanner.

Data analysis focused on the 10 s delay interval between stimulus encoding and recall. Using a multivariate searchlight approach with the classifier trained on the auditory perception data, we could successfully decode the cued syllable bilaterally in the mid portion of the superior temporal gyrus (STG). However, using the classifier trained on the “silent mouthing” data, we could also decode the cued syllable in the STG, with even better accuracy than for the auditory classifier. This suggests that the neural representations identified in the STG likely arise from motor-to-sensory feedback during subvocal rehearsal, rather than constituting a purely “acoustic-sensory” representation. In addition, the regions of the STG that showed significant syllable classification did not tend to show sustained univariate activity during the delay, suggesting that these neural signals would have been “missed” by more conventional approaches.
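The cross-decoding analysis described above (train a 12-way syllable classifier on perception or articulation data, then test it on delay-period activity within a moving searchlight) can be sketched with standard neuroimaging tools. Below is a minimal sketch using nilearn and scikit-learn; the file names, label arrays, and searchlight radius are illustrative assumptions, not details taken from the study.

```python
# Cross-task searchlight decoding sketch (nilearn + scikit-learn).
# All file names and labels below are hypothetical placeholders.
import numpy as np
from nilearn.image import concat_imgs
from nilearn.decoding import SearchLight
from sklearn.svm import LinearSVC

# Assumed inputs: per-trial beta images and 12-way syllable labels (0-11)
# for the training task (auditory perception or silent mouthing) and for
# the working-memory delay period.
train_imgs = "perception_betas.nii.gz"      # hypothetical 4D image
test_imgs = "delay_betas.nii.gz"            # hypothetical 4D image
y_train = np.load("perception_labels.npy")  # hypothetical labels
y_test = np.load("delay_labels.npy")        # hypothetical labels

# Concatenate the two datasets so the searchlight sees one 4D image.
all_imgs = concat_imgs([train_imgs, test_imgs])
y = np.concatenate([y_train, y_test])

# A single explicit (train, test) split makes each searchlight sphere
# train on perception data and test on delay-period data, rather than
# cross-validating within one task.
n_train = len(y_train)
cross_split = [(np.arange(n_train), np.arange(n_train, len(y)))]

sl = SearchLight(
    mask_img="brain_mask.nii.gz",  # hypothetical whole-brain mask
    radius=8,                      # sphere radius in mm (assumed)
    estimator=LinearSVC(),
    cv=cross_split,
    n_jobs=-1,
)
sl.fit(all_imgs, y)

# sl.scores_ holds per-voxel cross-decoding accuracy; chance level for
# a 12-way classification is 1/12.
```

The single explicit train/test split is the key design choice here: it is what turns an ordinary within-task cross-validated searchlight into a cross-task (perception-to-memory or articulation-to-memory) decoder.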

