Using frequency selectivity to examine category-informative dimension-selective attention
Sahil Luthra1, Chisom O. Obasih1, Adam T. Tierney2, Frederic Dick2,3, Lori L. Holt1; 1Carnegie Mellon University, 2Birkbeck, University of London, 3University College London
In everyday listening, individuals selectively attend to acoustic dimensions (e.g., frequency or duration) within a complex sound and ignore other simultaneous dimensions. This dimension-selective aspect of auditory attention has been less studied than classic “cocktail party” scenarios that involve spatially selective attention. Speech sound categories are defined over multiple dimensions that vary in informativeness; thus, speech perception may demand selective attention to diagnostic dimensions. However, we do not yet understand how listeners learn to selectively attend to informative acoustic dimensions during category learning, how selective attention impacts cortical representations of relevant dimensions, or whether selective attention involves suppression of irrelevant dimensions as well as enhancement of relevant dimensions. In this ongoing fMRI study (target N=30), we are examining these questions using novel nonspeech auditory categories that require reliance on information in high or low spectral bands. Prior to scanning, participants complete five days of stimulus-response-feedback training during which they learn four novel nonspeech categories to criterion (Obasih et al., in preparation). Exemplars are composed of three concatenated high-bandpass-filtered hums and three simultaneous low-bandpass-filtered hums; these hums are nonspeech pitch contours, derived from multiple talkers’ productions of Mandarin words varying in lexical tone. Within a frequency band, concatenated hums can be drawn either from a single lexical tone category or from multiple categories. Two nonspeech categories are defined by category-consistent hums in the high-frequency band and inconsistent (between-category) hums in the low band. The other two categories have category-consistent hums in the low band and category-inconsistent hums in the high band.
Thus, category learning requires reliance on – and perhaps selective attention to – category-diagnostic acoustic patterns within high or low frequency bands. Control trials involve categorization along an orthogonal dimension, stimulus amplitude (‘big’ Category A vs. ‘small’ Category A). During a single post-training MRI session, listeners categorize sounds in a 2AFC task in which categories are differentiated by information in either high or low spectral bands, or by relative amplitude. We combine this with tonotopic mapping across auditory cortex and “attention-o-tonotopic” mapping driven by overt endogenous attention to high and low frequency bands. We will examine how dimension-selective attention driven by the demands of categorization may impact cortical activation across informative and uninformative dimensions, and how selective attention effects may differ when demands are driven implicitly by task relevance (categorization) versus by directed attention (‘listen high’). This work will illuminate the cortical mechanisms supporting dimension-based auditory selective attention; through a comparison with our control condition (stimulus amplitude judgments), it will also allow us to assay a putative role for suppression when listeners deploy selective attention for categorization. It will provide a theoretical bridge from effects of explicitly directed attention (i.e., “listen high”) to effects of selective attention that may emerge over the course of auditory category learning, as listeners learn which dimensions are informative for categorization. Finally, the study links human studies of auditory attention to speech with non-human animal studies of frequency-selective auditory attention to nonspeech stimuli.
Topic Areas: Perception: Auditory, Methods