Slide Slam H8
Electrophysiological correlates of spoken and sung word processing in chronic aphasia and healthy aging
Lilli Kimppa1, Teppo Särkämö1; 1University of Helsinki
Speech production impairment is a hallmark of stroke-induced aphasia. Singing, in contrast, is often less affected. Stroke in the left perisylvian language areas may also compromise spoken word comprehension through disruption of word memory circuits. Whether this is reflected in the neural processing of spoken words in chronic aphasia, and in old age in general, is poorly understood. We presented disyllabic spoken and pitch-modulated words and pseudowords, carefully matched in their phonological and other acoustic properties, to patients with chronic aphasia and age-matched healthy elderly control participants. The fundamental frequency (F0) modulation of the first syllables mimicked sung input. EEG was recorded in a passive listening condition in which attention was directed to watching a silent movie, in order to reduce effects of task and attentional demands. ERPs to the first syllable were extracted to assess the impact of F0 modulation on obligatory responses to speech sounds. Further, ERPs time-locked to the second syllable, which crucially disambiguated the lexical from the meaningless items, were analysed to probe lexical activation in the two modalities and between the groups. The P1-N1 complex for the pitch-modulated syllable showed a clear effect of modality, with sung input producing larger amplitudes, indicating stronger neural suppression for the spoken items with homogeneous F0 than for the sung syllables. Furthermore, controls showed a stronger frontocentrally prominent P1 than the patients with chronic aphasia. N1 amplitudes did not differ between the groups, but were generally larger for spoken items at parietal sites. The P2 response showed partially different dynamics from the P1-N1, with spoken syllables eliciting a stronger frontocentral response than sung ones, possibly indicating better neural discrimination and auditory encoding of the spoken syllables.
Like the P1, the P2 was larger in the control than in the aphasia group across modalities, suggesting impaired speech sound encoding in aphasia. For the second syllables, the P1 was larger for sung than for spoken items across groups. This effect was reversed in the N1, which was stronger for spoken than for sung items across groups; overall, however, the N1 was stronger in controls. The pattern of larger amplitudes to spoken than sung input persisted at the N250 and N400 latencies. Interestingly, controls showed the expected N250 and N400 enhancement for spoken pseudowords over real words, but no such difference was observed for the sung items. In contrast, in the aphasia group the N250 was larger for pseudowords than real words only for sung items, and this effect was reduced in the N400. These results indicate impaired automatic activation of spoken word memory traces in chronic aphasia, but introducing singing-like pitch modulation into speech prosody seems to facilitate lexical access. Hence, using singing as a means of communication in chronic aphasia may benefit comprehension and strengthen lexical memory traces.