Slide Sessions

Slide Session B

Friday, October 25, 3:30 - 4:30 pm, Great Hall 1

Chair: Jonathan Peelle, Northeastern University

Talk 1: Relative encoding of speech intensity in the human temporal cortex

Ilina Bhaya-Grossman1,2, Yulia Oganian3, Emily Grabowski2, Edward Chang1; 1University of California, Berkeley, 2University of California, San Francisco, 3University of Tübingen

Lexical stress, or the emphasis placed on syllables within words, critically facilitates word recognition and comprehension. For instance, it enables listeners to distinguish between the noun “a present” (PRE-sent) and the verb “to present” (pre-SENT). In English, lexical stress is prominently signaled by relative speech intensity, with the stressed syllable exhibiting the greatest intensity relative to other syllables in the word. Prior work has shown that the human speech cortex on the superior temporal gyrus (STG) encodes speech intensity as a series of discrete acoustic landmarks marking moments of peak intensity change (peakRate). Building on this finding, a key question arises: is there a neural encoding of relative intensity in the STG that supports the perception of lexical stress? To address this question, we recorded intracranial activity (n=9 ECoG patients) while English-speaking participants performed two experiments. In Experiment 1, participants performed a forced-choice task, identifying whether the first or second syllable was stressed in a set of synthesized two-syllable pseudo-words (e.g., hu-ka, ma-lu). The intensity of the first syllable in each pseudo-word varied while the intensity of the second syllable was fixed, allowing us to experimentally test whether neural responses to the second syllable depended on the intensity of the first. We found that a subset of cortical sites on the human STG encoded relative intensity; that is, these sites responded to the second syllable only when its intensity was greater than that of the first syllable. Critically, the cortical sites that encoded relative intensity were distinct from those that encoded peakRate. Neither population encoded which syllable participants perceived as stressed when they were presented with ambiguous pseudo-words in which both syllables had identical intensity. In Experiment 2, we used a passive listening paradigm to extend our findings to a naturalistic speech stimulus. Our results indicate that the relative and absolute intensity of speech are encoded in two distinct neural populations on the STG and, further, that these populations do not encode stress percepts when the intensity cue to lexical stress is removed. Our results reveal the multiple, distinct neural representations that work in concert to give rise to lexical stress perception.
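
For readers unfamiliar with the peakRate landmark referenced above, the sketch below illustrates the general idea: locate local maxima in the rate of change of the speech intensity envelope. This is an illustrative Python reconstruction under simplified assumptions, not the authors' analysis pipeline; the toy envelope, sampling rate, and threshold are hypothetical.

```python
# Illustrative sketch of peakRate landmarks: local maxima in the rising rate of
# change of a speech intensity envelope. Values below are toy placeholders.
import numpy as np
from scipy.signal import find_peaks

def peak_rate_landmarks(envelope, fs, min_rise=0.5):
    """Return sample indices at local maxima of the rising intensity rate."""
    rate = np.gradient(envelope) * fs   # d(intensity)/dt in units per second
    rate[rate < 0] = 0                  # keep rising portions only
    peaks, _ = find_peaks(rate, height=min_rise)
    return peaks

fs = 100  # Hz
t = np.arange(0, 1.0, 1 / fs)
# toy two-syllable-like envelope: a louder first "syllable" and a softer second one
envelope = np.exp(-((t - 0.25) ** 2) / 0.005) + 0.6 * np.exp(-((t - 0.70) ** 2) / 0.005)
print(peak_rate_landmarks(envelope, fs))
```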

Talk 2: Cueing Improves Expressive Emotional Aprosodia in Acute Right Hemisphere Stroke: Investigating Neural and Acoustic Characteristics

Shannon M. Sheppard1, Gabriel Cler1, Sona Patel2, Ji Sook Ahn2, Lynsey Keator3, Isidora Diaz-Carr4, Argye E. Hillis4,5, Alexandra Zezinka Durfee6; 1University of Washington, Seattle, WA, 2Seton Hall University, 3University of Delaware, 4Johns Hopkins University School of Medicine, 5Johns Hopkins University, 6Towson University

Introduction: Right hemisphere (RH) stroke frequently impacts expressive emotional prosody (the pitch, rate, loudness, and rhythm of speech), resulting in expressive aprosodia. Expressive aprosodia is associated with negative outcomes including reduced social networks (Hewetson et al., 2021), but we still know little about its neural correlates and effective treatments. Expressive aprosodia can arise from impaired motor planning and implementation, or from lack of awareness of the acoustic characteristics that convey specific emotions (e.g., sadness is conveyed with a quiet volume and low pitch). We aimed to 1) identify the specific acoustic features of five emotions (happy, sad, angry, afraid, surprised) that differed between healthy controls and individuals with expressive aprosodia not resulting from motor deficits, 2) determine whether aprosodia would improve when specific cues (e.g., happy = high pitch, fast rate) were provided for each emotion, and 3) investigate neural correlates. Methods: Patient group: 21 participants with acute RH damage following ischemic stroke and expressive aprosodia were enrolled and tested within five days of hospital admission. Aprosodia diagnosis was confirmed by speech-language pathologists. Control group: 25 healthy age-matched controls. Prosody Testing and Analysis: Speech was recorded while participants completed two tasks: 1) reading aloud 20 semantically neutral sentences with a specified emotion (e.g., Happy: “He is going home today.”) without acoustic cues, and 2) reading aloud the same sentences with acoustic cues provided (e.g., Happy: fast rate, high pitch). Automated routines in Praat were used to extract acoustic characteristics relevant to each emotion (e.g., fundamental frequency variation, duration) for each sentence. Mixed-effects linear models were used to evaluate whether the acoustic characteristics of each emotion differed between the control and patient groups, and to determine whether cueing improved impaired characteristics of speech. Neuroimaging and Analysis: Acute neuroimaging, including diffusion-weighted imaging (DWI), was acquired. Areas of ischemia were identified and traced on DWI images using MRIcron (Rorden & Brett, 2000). Lesion volume and the proportion of damaged tissue in regions of interest (ROIs) from the JHU atlas were calculated. Mixed-effects linear models evaluated whether changes to specific acoustic features were predicted by damage to right hemisphere ROIs. Results: The patient group differed from controls only on emotions with positive valence (happy and surprised). They had significantly lower pitch (surprised: p = 0.009; happy: p < 0.001) and slower rate (surprised: p = 0.002; happy: p = 0.002). Cueing improved speech rate for happy (p = 0.04) and surprised sentences (p < 0.001), and improved pitch in happy (p < 0.001), but not surprised, sentences. Lesion mapping analyses revealed that damage to the right putamen, external capsule, fronto-occipital fasciculus, and posterior superior temporal gyrus was implicated in expressive aprosodia. Conclusion: Expressive aprosodia primarily impacts the expression of emotions with positive valence, but providing acoustic cues can improve pitch and speech rate. Damage to both right hemisphere cortical and subcortical structures was implicated in expressive aprosodia. These findings have clinical implications for the development of expressive aprosodia treatments, and contribute to neural and cognitive models of prosody expression.
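
As an illustration of the kind of per-sentence acoustic feature extraction described above (fundamental frequency variation and duration), the sketch below uses parselmouth, a Python interface to Praat. It is a minimal example, not the authors' scripts; the wav file name is hypothetical and the pitch settings are Praat defaults.

```python
# Minimal sketch of per-sentence acoustic feature extraction with parselmouth.
# The file name below is a hypothetical placeholder.
import numpy as np
import parselmouth

def sentence_features(wav_path):
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch()                         # default Praat pitch analysis
    f0 = pitch.selected_array['frequency']
    f0 = f0[f0 > 0]                                # drop unvoiced frames
    return {
        "duration_s": snd.xmax - snd.xmin,         # sentence duration (proxy for rate)
        "f0_mean_hz": float(np.mean(f0)),
        "f0_sd_hz": float(np.std(f0)),             # fundamental frequency variation
    }

print(sentence_features("happy_sentence_01.wav"))
# Per-sentence features like these can then be entered into mixed-effects models
# (e.g., group x cueing condition, with participant as a random effect).
```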

Talk 3: Relative Brain Age as a Biomarker for Language Function in Acute Aphasia

Sigfus Kristinsson1, John Absher2, Sarah Goncher2, Roger Newman-Norlund1, Natalie Hetherington1, Alex Teghipco1, Chris Rorden1, Leonardo Bonilha1, Julius Fridriksson1; 1University of South Carolina, SC, USA, 2Prisma Health-Upstate, SC, USA

Introduction: Although factors such as lesion location, age, and stroke severity account for variability in language function, long-term prognostication remains problematic in aphasia.1-3 Recently, we found that brain age, a neuroimaging-derived measure of brain atrophy, predicted language function at stroke onset and long-term recovery in a small sample of stroke survivors.4 Here, we examined the extent to which brain age explains variability in language performance in a larger, non-selective sample of acute stroke patients. Methods: The current study relies on archival data from 1,794 individuals admitted to the Prisma Health-Upstate facility in Greenville, SC (F/M, 889/901; age, 67.8±15.1 y). Participants underwent routine clinical neuroimaging (T1-weighted) and their language performance was assessed by an on-call clinician. After excluding participants with structural brain pathology, MRI data were preprocessed using established procedures and we estimated the brain age of 1,027 participants using the publicly available BrainAgeR analysis pipeline.5,6 To overcome the effects of biased brain age estimates in younger and older individuals, we calculated Relative Brain Age (RBA) as follows6: RBA = Estimated Brain Age − Expected Brain Age, where Expected Brain Age = E(Estimated Brain Age | Chronological Age). Estimated Brain Age represents the predicted brain age based on BrainAgeR, whereas Expected Brain Age was calculated by regressing Estimated Brain Age on Chronological Age. Thus, a positive RBA reflects an “older looking brain” and a negative RBA a “younger looking brain”, given chronological age. Logistic regression models were constructed to examine the association between RBA and the presence/absence of aphasia, and regression models to investigate the effect of RBA on the following behavioral outcomes: NIHSS Language (N=478), WAB Auditory Comprehension (N=52), WAB Yes/No Questions (N=87), WAB Naming (N=87), and WAB Repetition (N=290). Models were adjusted for chronological age, lesion size, and affected hemisphere. Results: Our primary analyses revealed a significant interaction between RBA and lesion size (β=.001, p<.01) for the prediction of aphasia presence, suggesting that a negative RBA (‘younger looking brain’) is associated with absence of aphasia in the case of relatively small lesions. RBA was not associated with performance on any of the continuous language outcomes. To scrutinize the relationship between RBA and lesion size, we added a binary term reflecting brain resilience (positive/negative RBA). We observed a significant interaction between brain resilience and lesion size for the NIHSS Language score (β=.002, p<.05), WAB Yes/No (β=-.002, p<.001), and WAB Naming (β=-.003, p<.01), suggesting that preserved brain resilience is predictive of better language performance in smaller lesions only. We similarly observed a significant interaction between brain resilience and lesion in the right hemisphere for WAB Repetition (β=-.001, p<.05), and between brain resilience and lesion in the left hemisphere for WAB Naming (β=-.001, p<.05) and WAB Repetition (β=-.001, p<.001). Discussion: Our findings suggest that brain age explains variability in language performance not accounted for by lesion characteristics and age. In particular, brain resilience emerged as a prominent predictor of language performance. Although a more fine-grained analysis is underway, we are encouraged by the positive findings thus far and contend that our findings promise to inform prognostication procedures in post-stroke aphasia.
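
The RBA adjustment defined above can be made concrete with a short numerical sketch: Expected Brain Age is obtained by regressing Estimated Brain Age on Chronological Age, and RBA is the residual. The ages below are made-up illustrative values, not study data.

```python
# Minimal numerical sketch of the Relative Brain Age (RBA) calculation.
# Ages are placeholder values for illustration only.
import numpy as np

chron_age = np.array([54.0, 61.0, 70.0, 78.0, 83.0])       # chronological age (years)
estimated_age = np.array([58.0, 60.0, 74.0, 75.0, 90.0])   # brain age from a model such as BrainAgeR

# Expected Brain Age: linear regression of estimated brain age on chronological age
slope, intercept = np.polyfit(chron_age, estimated_age, deg=1)
expected_age = intercept + slope * chron_age

# RBA > 0: "older looking brain" for one's age; RBA < 0: "younger looking brain"
rba = estimated_age - expected_age
print(np.round(rba, 2))
```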

Talk 4: A hierarchical ensemble approach to predicting response to phonological versus semantic naming intervention in aphasia using multimodal data

Dirk Den Ouden1, Alex Teghipco1, Sigfus Kristinsson1, Chris Rorden1, Grant Walker2, Julius Fridriksson1, Leonardo Bonilha1; 1University of South Carolina, 2University of California, Irvine

Introduction: Treatment selection for persons with aphasia (PWA) is aided by improved prediction of treatment response in general, but especially of response to specific types of intervention. Several studies have attempted to predict response to ‘phonological’ interventions for lexical production, focused on word-form representations, versus ‘semantic’ interventions, focused on meaning representations, based on biographical, behavioral, or neurological variables in isolation. Here, we tailored treatment-response predictions to individuals by integrating multimodal information while considering multivariate relationships of various complexities. Methods: Out of 93 PWA who received both phonological and semantic interventions, 34% exhibited a clinically meaningful improvement on the Philadelphia Naming Task (>9/175 points), with 9 responding to phonological and 23 to semantic treatment. Response was predicted in a nested leave-one-out cross-validation scheme using a set of 345 baseline biographical, behavioral, and neuroimaging variables. Behavioral variables included latent constructs of impaired domains based on our prior modeling efforts (Walker et al., 2018). Neuroimaging variables spanned measures of lesion load, task-based BOLD response, cerebral blood flow, fractional anisotropy, mean diffusivity, and functional and structural connectivity. Unilateral variables were expressed in proportion to their bilateral counterparts. Given that the factors determining treatment response may differ from those determining response to phonological versus semantic treatment, we adopted a flexible hierarchical modeling approach. We first trained a binary classifier to predict general treatment response. A second binary classifier was then trained on ‘responders’ to adjudicate between the two interventions. Both classifiers were ensembles of decision trees, boosted using RUSBoost. Model tuning included identification of the most reliably predictive features through stability selection, enhanced by combining multiple complementary algorithms for forming the feature ensemble (Teghipco et al., forthcoming). Results: General treatment response was predicted with 77% balanced accuracy and 0.72 AUC (p<0.0001). In the correctly predicted responders, a second model achieved 85% balanced accuracy and 0.79 AUC (p<0.0001). The combined model had an overall balanced accuracy of 72% and an AUC of 0.77 (p<0.0001). Different patterns of feature weights drove model performance, and feature importance did not correlate between the two models (p=0.4). Nevertheless, some features were influential across both models, highlighting the complex interaction of latent ability estimates and error types, consistency across multiple baseline picture-naming sessions, and whole-brain CBF. Non-responders were more strongly predicted by high ventral functional connectivity, inconsistency of picture-naming errors, lesion-load characteristics, low performance on semantic judgment, reduced CBF, and higher stroke severity. Semantic responders were more strongly predicted by high perilesional temporal-lobe BOLD response, while phonological responders were more strongly predicted by high fractional anisotropy across the brain, but especially in the dorsal stream. No biographical variables were among the top predictors of treatment response.
Conclusions: The hierarchical multimodal approach we present here applies machine learning to predict whether PWA will respond to impairment-based naming treatment, and whether ‘responders’ show greater effects of phonologically focused versus semantically focused intervention. It yields improved predictions, particularly of response to phonological versus semantic intervention. These results can aid in the selection of impairment-based treatment for PWA with naming difficulties.
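
For readers interested in the modeling structure, the sketch below illustrates the two-stage (hierarchical) classification scheme described in the Methods: a first RUSBoosted decision-tree ensemble predicts general treatment response, and a second, trained on predicted responders, adjudicates between phonological and semantic response. The synthetic data, labels, and hyperparameters are placeholders, and the nested leave-one-out cross-validation and stability selection steps are omitted for brevity.

```python
# Schematic sketch of a two-stage (hierarchical) classifier with RUSBoosted
# decision-tree ensembles from imbalanced-learn. All data below are synthetic.
import numpy as np
from imblearn.ensemble import RUSBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(93, 345))              # 93 PWA x 345 multimodal baseline features
responder = rng.integers(0, 2, size=93)     # stage 1 label: any clinically meaningful response
treatment = rng.integers(0, 2, size=93)     # stage 2 label: phonological (0) vs semantic (1)

# Stage 1: predict general treatment response for all participants
stage1 = RUSBoostClassifier(n_estimators=200, random_state=0).fit(X, responder)

# Stage 2: among predicted responders, adjudicate phonological vs semantic response
is_resp = stage1.predict(X) == 1
stage2 = RUSBoostClassifier(n_estimators=200, random_state=0).fit(X[is_resp], treatment[is_resp])

# A new participant is routed through stage 1, and through stage 2 only if
# predicted to respond to treatment at all.
```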

 
