Poster B50, Wednesday, November 8, 3:00 – 4:15 pm, Harborview and Loch Raven Ballrooms

Representations of amplitude modulations in auditory onsets, ramp tones, and speech in the human superior temporal gyrus

Yulia Oganian1,2, Edward Chang1,2; 1Department of Neurological Surgery, University of California, San Francisco, 2Center for Integrative Neuroscience, University of California, San Francisco

Making sense of complex auditory inputs requires temporally precise parsing of individual events out of the auditory stream. Auditory event onsets are typically marked by an increase in amplitude, and previous animal and human studies identified the dynamics of the amplitude rise at onset as a central feature encoded throughout the subcortical auditory pathway. Amplitude modulations, however, also mark changes within ongoing sounds, e.g., in the speech envelope. Yet it is unknown how the human auditory cortex differentiates between amplitude rises from silence and within an ongoing sound, and whether onset encoding can account for tracking of the speech amplitude envelope. To address this, we designed tone stimuli containing amplitude ramps that rose either from silence (ramp-from-silence, RfS, condition) or from an amplitude baseline (ramp-from-baseline, RfB, condition). The amplitude of each ramp increased linearly to a peak and then returned to silence or baseline, with the rate of amplitude change varied across stimuli. We recorded local field potentials using intracranial multi-electrode arrays placed over the temporal lobes of six patients undergoing evaluation for epilepsy neurosurgery while they passively listened to the tones. In both conditions, ramps elicited transient responses in the high-gamma frequency range (HG, 70-150 Hz) in posterior (p) and middle (m) superior temporal gyrus (STG), with larger HG amplitudes for faster-rising ramps. We observed a striking double dissociation of response types: pSTG encoded the rate of amplitude change in the RfS condition only, whereas mSTG encoded the rate of amplitude change in the RfB condition only. Crucially, the rate of amplitude envelope modulation in continuous speech was also represented in mSTG, but not in pSTG. Our results reveal functionally and spatially distinct representations along the STG of sound onsets occurring in silence and against an ongoing background. Moreover, our data suggest that speech amplitude tracking in mSTG may rely on the same neural mechanisms as the encoding of onsets within ongoing sound.
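To make the two stimulus conditions concrete, below is a minimal sketch of how such ramp tones could be synthesized. All specific values (carrier frequency, sampling rate, baseline level, rise rates, pre/post durations) and the symmetric fall back to baseline are assumptions for illustration; the abstract does not report the actual stimulus parameters, and the function name ramp_tone is hypothetical.

```python
# Illustrative sketch of the RfS and RfB ramp-tone conditions.
# Parameter values are arbitrary assumptions, not the study's stimuli.
import numpy as np

def ramp_tone(rise_rate, baseline=0.0, peak=1.0, carrier_hz=1000.0,
              fs=44100, pre_s=0.2, post_s=0.2):
    """Pure tone whose amplitude rises linearly from `baseline` to `peak`
    at `rise_rate` (amplitude units per second), then falls back to `baseline`
    (symmetric fall assumed here).

    baseline = 0.0 corresponds to the ramp-from-silence (RfS) condition;
    baseline > 0.0 corresponds to the ramp-from-baseline (RfB) condition.
    """
    rise_s = (peak - baseline) / rise_rate                    # duration of the linear rise
    env_pre = np.full(int(pre_s * fs), baseline)              # steady baseline (or silence)
    env_rise = np.linspace(baseline, peak, int(rise_s * fs))  # linear amplitude ramp
    env_fall = np.linspace(peak, baseline, int(rise_s * fs))  # assumed symmetric return
    env_post = np.full(int(post_s * fs), baseline)
    envelope = np.concatenate([env_pre, env_rise, env_fall, env_post])
    t = np.arange(envelope.size) / fs
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# Example: a fast-rising ramp from silence and a slow-rising ramp from an
# ongoing baseline (rates and baseline level are arbitrary).
rfs_fast = ramp_tone(rise_rate=10.0, baseline=0.0)
rfb_slow = ramp_tone(rise_rate=2.0, baseline=0.3)
```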

Topic Area: Perception: Auditory
