
Poster E66, Thursday, August 22, 2019, 3:45 – 5:30 pm, Restaurant Hall

A Lexicon in the Anterior Auditory Ventral Stream: Preliminary evidence from an fMRI-RA Study

Srikanth Damera1, James Mattei1, Laurie Glezer2, Patrick Cox3, Xiong Jiang1, Josef Rauschecker1, Maximilian Riesenhuber1;1Georgetown University Medical Center, 2San Diego State University, 3George Washington University

The auditory system, like the visual system, is thought to be organized following a dual-stream architecture. Under this framework, the auditory ventral stream, known as the “what” pathway, is specialized for recognizing auditory “objects,” including spoken words. Analogous work in the visual system has shown that visual word recognition proceeds along a simple-to-complex hierarchy in which words are first represented via simple visual features in early visual cortex, and then by increasingly complex features along the ventral stream. This process culminates in a visual lexicon thought to be located in the posterior fusiform cortex (Glezer et al., 2009, 2015, 2016). Despite growing evidence that the auditory system is similarly organized along a simple-to-complex hierarchy, it is still unknown whether, and if so where, a putative auditory lexicon might exist. We tested the hypothesis of an auditory lexicon in the anterior superior temporal gyrus (aSTG) using an fMRI rapid adaptation (fMRI-RA) experiment inspired by our aforementioned visual work. In fMRI-RA, two stimuli are presented in quick succession in each trial, and the BOLD-contrast response to the pair is taken to reflect the similarity of the neuronal activation patterns corresponding to the two individual stimuli, with the lowest response when the two stimuli activate identical neuronal populations, and the maximal response when they activate disjoint groups of neurons. In the present study, subjects performed two such fMRI-RA experiments. In Experiment 1, subjects (N=8 so far) heard pairs of real English spoken words on every trial while performing a phoneme oddball detection task. The words in a pair were either identical (SAME), differed by a single phoneme (1PH), or shared no phonemes at all (DIFF). We investigated the adaptation effect in subject-specific left anterior and middle superior temporal gyrus ROIs (aSTG and mSTG).
These ROIs were identified in an independent localizer scan and chosen to be as close as possible to the putative aSTG and mSTG word- and phoneme-selective foci, respectively, reported in a recent meta-analysis (DeWitt and Rauschecker, 2012). We found that the left aSTG (but not the mSTG) ROI showed a significant adaptation effect when comparing SAME and 1PH (p = .0167) as well as SAME and DIFF (p = .0108), but not when comparing DIFF and 1PH (p > .05), compatible with an auditory lexicon in which neurons are tightly tuned to individual real words. In Experiment 2, subjects (N=7 so far) heard pairs of spoken pseudowords on every trial while performing a phoneme oddball detection task. As in Experiment 1, the pseudowords in a pair were either identical, differed by a single phoneme, or shared no phonemes at all. As these pseudowords were unfamiliar to the subjects, we hypothesized that aSTG regions corresponding to an auditory lexicon would not show a selective representation for these novel words, but would instead (analogous to our visual work) show a graded release from adaptation corresponding to the amount of phonemic overlap between the words. The results so far show a trend in this direction. In sum, the current study provides preliminary evidence for an auditory lexicon in the aSTG. FUNDING SOURCES: NSF grant (BCS-1756313)
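The adaptation logic underlying the two experiments can be sketched numerically. The following toy Python snippet (all values hypothetical, not the study's data) contrasts the response profile predicted by a word-level lexicon, where any phoneme change recruits a disjoint neuronal population, with the graded profile predicted for unfamiliar pseudowords represented sublexically:

```python
# Illustrative sketch only (not the authors' analysis); response values
# are hypothetical and chosen to show the qualitative predictions.

def ra_response(overlap):
    """BOLD response to a stimulus pair in fMRI-RA: the response is lowest
    when the two stimuli activate identical neuronal populations (full
    adaptation, overlap = 1) and highest when they activate disjoint
    populations (full release from adaptation, overlap = 0)."""
    return 1.0 - overlap  # overlap assumed normalized to [0, 1]

# Lexicon hypothesis (neurons tightly tuned to whole real words):
# a single-phoneme change already activates a disjoint population,
# so 1PH releases adaptation as fully as DIFF.
lexicon = {"SAME": ra_response(1.0),
           "1PH":  ra_response(0.0),
           "DIFF": ra_response(0.0)}

# Graded/sublexical hypothesis (predicted for unfamiliar pseudowords):
# release from adaptation scales with phonemic overlap.
graded = {"SAME": ra_response(1.0),
          "1PH":  ra_response(0.5),
          "DIFF": ra_response(0.0)}

# Experiment 1's aSTG result matches the first profile:
assert lexicon["SAME"] < lexicon["1PH"] == lexicon["DIFF"]
# Experiment 2's hypothesized pseudoword profile is graded:
assert graded["SAME"] < graded["1PH"] < graded["DIFF"]
```

The key qualitative signature distinguishing the two hypotheses is whether the 1PH condition patterns with DIFF (lexical, all-or-none tuning) or falls between SAME and DIFF (graded, sublexical tuning).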

Themes: Perception: Speech Perception and Audiovisual Integration, Perception: Auditory
Method: Functional Imaging
