Slide Slam H6
Processing of degraded vocal signals in primary progressive aphasia and Alzheimer's disease
Jessica Jiang1, Mai-Carmen Requena-Komuro1, Elia Benhamou1, Jeremy C. S. Johnson1, Harri Sivasathiaseelan1, Lucy Russell1, Annabel Nelson1, Jonathan Rohrer1, Jason D. Warren1, Chris J. D. Hardy1; 1Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London
The ability to understand speech and paralinguistic signals is crucial for everyday communication, yet we must generally process such signals under suboptimal listening conditions, such as background noise. This presents the auditory brain with an intensely demanding computational problem, which it normally solves automatically and efficiently. However, the processing of degraded speech and other vocal signals is potentially vulnerable to neurodegenerative diseases that target the distributed neural circuits mediating vocal signal decoding. Here we addressed this issue in a cohort of patients representing all major variant syndromes of primary progressive aphasia (PPA), in relation to patients with Alzheimer's disease (AD) and healthy age-matched controls. We used noise-vocoding to reduce the amount of spectral information in vocal signals: in two separate experiments, we assessed the impact of this manipulation on the recognition of spoken words and emotional prosody (three-digit numbers spoken in one of the three universal emotions), respectively. The nonfluent PPA, logopenic PPA, and AD groups showed a raised threshold (channel number) for recognition of vocoded spoken words relative to healthy controls. All dementia syndromic groups showed reduced recognition of emotional prosody after vocoding compared with natural speech; however, this perceptual 'cost' was most pronounced in the logopenic PPA group and least marked in the nonfluent PPA group. Our findings suggest that processing of degraded vocal signals may differentiate dementia syndromes based on distinct pathophysiological mechanisms.