Decoding intended words in an individual with aphasia using stereo-electroencephalography

Poster B5 in Poster Session B, Tuesday, October 24, 3:30 - 5:15 pm CEST, Espace Vieux-Port

Tessy M Thomas1,2, Oscar Woolnough1,2, Nitin Tandon1,2; 1McGovern Medical School, University of Texas Health Science Center at Houston, 2Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston

Aphasia is a devastating language disorder that affects more than one million people in the US alone. While some patients improve over time with speech therapy, many have long-term impairments. Augmentative and alternative communication devices provide compensatory strategies, but they are limited and cumbersome to use daily. Alternatively, recent research aims to develop speech brain-computer interfaces (speech-BCIs) that can help restore speech function by detecting a patient’s intended speech from the neural activity of the language network. Speech-BCIs have been demonstrated to predict speech in healthy individuals and in non-verbal patients with no cortical damage. However, this technology has yet to be investigated in patients with aphasia, who have sustained damage to some portion of the cortical language network. We investigated speech decoding in a patient diagnosed with non-fluent aphasia after a traumatic brain injury, using neural recordings from language areas that remained intact. Using stereo-electroencephalographic (sEEG) electrodes, placed for the localization of epilepsy, we recorded broadband gamma activity (70-150 Hz) from frontal and parietal cortex as the patient named pictures of eight common objects, each repeated about 50 times in randomized order. In each trial, the patient viewed the picture for 2 s, followed by a 1 s delay before being probed for a response. The patient correctly articulated 25% of responses; a further 38% contained phonemic errors, 6% were incorrect words, and 31% yielded no response. We used a non-linear classification model (a support vector machine with a Gaussian kernel) to decode the patient’s response from trials in which a word was produced, with or without phonemic errors. Responses were classified with 43.1% accuracy (12.5% chance) during the articulation period, with the most accurate word classified at 70%. Prior to articulation, the response was classified with 16.2% accuracy during the delay period and with 21.9% accuracy once the patient was probed for a response. We then extended the analysis to test whether the model could decode the patient’s intended response from trials in which no word was produced aloud. Because no overt response was available in these trials, the model predicted the name of the displayed picture, with 21.4% accuracy during the delay period and 18.0% once the patient was probed for a response. These results demonstrate that the remaining intact language areas can provide sufficient information for a speech-BCI to decode intended words from neural population activity in an aphasic patient, despite the errors common to non-fluent speech. Furthermore, given the high variability of aphasia-related cortical damage patterns, sEEG enables distributed electrode coverage that can access widespread intact areas with minimal risk. As such, sEEG-based speech-BCIs hold much potential as assistive devices for patients with aphasia to overcome difficulties in expressing their thoughts aloud.
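For concreteness, the broadband gamma (70-150 Hz) signal described above is typically extracted by band-pass filtering each channel and taking the Hilbert-transform amplitude envelope. The sketch below illustrates that standard pipeline; the function name, filter order, and array layout are assumptions for illustration, not details taken from this study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def broadband_gamma_envelope(seeg, fs):
    """Estimate the broadband gamma (70-150 Hz) amplitude envelope.

    seeg : array of shape (n_channels, n_samples), raw sEEG voltage
    fs   : sampling rate in Hz
    Filter order and zero-phase filtering are illustrative choices,
    not specified in the abstract.
    """
    # 4th-order Butterworth band-pass over the 70-150 Hz range
    b, a = butter(4, [70.0, 150.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, seeg, axis=-1)
    # Magnitude of the analytic signal gives the instantaneous amplitude
    return np.abs(hilbert(filtered, axis=-1))
```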
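Likewise, the classification step could be sketched with scikit-learn as a support vector machine with a Gaussian (RBF) kernel, evaluated by cross-validation over the eight word classes (12.5% chance). Feature construction, standardization, fold count, and function names are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def decode_words(features, labels, n_folds=10):
    """Classify the produced word on each trial from neural features.

    features : (n_trials, n_features), e.g. per-channel gamma averaged
               over a trial window (an assumed feature construction)
    labels   : (n_trials,) word identity; 8 unique words -> 12.5% chance
    """
    # StandardScaler is an assumed preprocessing step, not specified
    # in the abstract; SVC(kernel="rbf") is the Gaussian-kernel SVM.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, features, labels, cv=n_folds)
    return scores.mean()
```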

Topic Areas: Language Production, Disorders: Acquired
