
Examining semantic processing in the face of a spectrally degraded speech signal

Poster C75 in Poster Session C, Wednesday, October 25, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Christina Blomquist¹, Rochelle Newman¹, Jan Edwards¹, Ellen Lau¹; ¹University of Maryland

Cochlear implants (CIs) have transformed the lives of deaf listeners. While these devices enable individuals to hear via electrical stimulation of the auditory nerve, the signal is spectrally degraded, making speech perception and language comprehension more difficult. Specifically, phonemic contrasts that rely on perception of spectral acoustic features are less reliably discriminated by listeners with CIs. Here we describe an in-progress ERP study of spectrally degraded speech processing in healthy adults, designed to test the hypothesis that increased phonemic ambiguity drives expanded semantic competition during comprehension. Both adults and children with CIs demonstrate decreased efficiency and accuracy in understanding spoken language. Eye-tracking studies have revealed that listeners with CIs are slower to access the meaning of spoken words and less efficient at using this information to facilitate processing of upcoming words. The few studies using ERPs to investigate speech recognition by listeners with CIs have also demonstrated delayed semantic processing (e.g., increased N400 latencies). The current study investigates a potential mechanism underlying these delays: expanded cohort competition. Increased ambiguity of the initial phoneme of a word (e.g., shave and save sound more similar when speech is spectrally degraded) may lead to a larger cohort of words competing during word recognition, and cascading activation could lead to semantic associates of these competitors being considered as well. Expanded competition at the phonological and semantic levels could explain delays in both single-word processing and higher-level semantic integration. We measure the N400 elicited in a spoken word-picture mismatch task in which the words have initial phonemes that are harder to perceive in a spectrally degraded signal (i.e., /t/ vs. /k/ and /s/ vs. /ʃ/). Participants are listeners with typical hearing (TH) who perceive speech degraded in a manner similar to the cochlear implant signal, compared with a control group listening to clear speech. ERPs are recorded while participants view pictures (e.g., beard) preceded by spoken primes that either fully match the target (Match prime; beard), are semantically related to the target (Semantic prime; shave), or rhyme with the semantically related word (Phonosemantic prime; save). In addition, two types of phonemic contrasts distinguish the semantic and phonosemantic primes: contrasts that are more ambiguous when perceived via a CI (place; shave vs. save) and contrasts that are easily perceived (multiple cues; shave vs. grave). Data collection is ongoing. In preliminary analyses of data from 26 participants, we computed the average voltage at electrode Fz across the critical time window, 250 to 550 ms after picture onset. Difference measures for the Match (beard), Semantic (shave), and Phonosemantic (save) effects were computed by comparing each condition to the easy-to-perceive phonosemantic (grave) prime. Initial results show that, compared to the control group, which demonstrates match, semantic, and phonosemantic effects, the degraded-speech group shows a smaller match effect and absent semantic and phonosemantic effects. If these results hold, they may suggest that delays in lexical processing are driven not by expanded cohort competition but rather by slower activation of the perceived word, which may actually limit semantic access and competition.
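As an illustration of the analysis step described above, the sketch below computes window-averaged difference measures using the open-source MNE-Python library. The file name, condition labels, and epoch structure are hypothetical placeholders, not the authors' actual pipeline; only the electrode (Fz), the 250-550 ms window, and the subtraction against the grave-prime condition come from the abstract.

import mne

# Hypothetical preprocessed epochs, time-locked to picture onset.
# The file name and condition labels are illustrative placeholders.
epochs = mne.read_epochs("sub-01_task-wordpicture-epo.fif")

TMIN, TMAX = 0.250, 0.550  # critical N400 window after picture onset (s)

def mean_fz(epochs, condition):
    """Average voltage at electrode Fz across the critical window."""
    evoked = epochs[condition].average(picks=["Fz"])
    return evoked.copy().crop(tmin=TMIN, tmax=TMAX).data.mean()

# Each effect is that condition's mean Fz voltage minus the voltage for
# the easy-to-perceive phonosemantic (grave) prime baseline.
baseline = mean_fz(epochs, "grave")
for cond in ("match", "semantic", "phonosemantic"):
    effect = mean_fz(epochs, cond) - baseline
    print(f"{cond} effect: {effect * 1e6:+.2f} microvolts")

In practice such measures would be computed per participant and submitted to group-level statistics; the sketch shows only the single-subject window average the abstract describes.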

Topic Areas: Signed Language and Gesture, Meaning: Discourse and Pragmatics
