
The role of degraded auditory input in predictive audiovisual language processing: the case of cochlear implant users


Poster C80 in Poster Session C, Wednesday, October 25, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Simone Gastaldon1,2, Noemi Bonfiglio1, Francesco Vespignani1,3, Flavia Gheller1,3, Davide Brotto4, Davide Bottari5, Patrizia Trevisi4,2,3, Alessandro Martini4,2,3, Francesca Peressotti1,2,3; 1Dipartimento di Psicologia dello Sviluppo e della Socializzazione (DPSS), University of Padova, Italy, 2Padova Neuroscience Center (PNC), University of Padova, Italy, 3Centro Interdipartimentale di Ricerca “I-APPROVE – International Auditory Processing Project in Venice”, University of Padova, Italy, 4Dipartimento di Neuroscienze (DNS), UOC Otorinolaringoiatria, University of Padova, Italy, 5Scuola IMT Alti Studi Lucca, Italy

Seeing our interlocutor’s mouth helps us understand speech when the signal is suboptimal (e.g., in a noisy environment). Similar challenges are faced by deaf people with cochlear implants (CI users), whose sensory input is less detailed than that of normal-hearing (NH) people. How does the integration of auditory and visual input interact with lexical predictability? Is this interaction different in CI users relative to NH people? In an exploratory electroencephalographic study, CI users and NH people were presented with audio-video recordings of an Italian speaker uttering sentences. Before the final word, an 800 ms silent gap was introduced. The predictability of the final word was determined by the preceding sentential constraint (high vs. low). Additionally, mouth visibility (visible vs. covered) was manipulated, but only during sentence-frame presentation (i.e., during the processing of the linguistic input that allows listeners to generate lexical predictions). In preliminary ERP analyses time-locked to gap onset on 17 CI users (mean age = 22.35, SD = 10.8; 7 males, 9 females, 1 non-binary), we found an N400 effect at centro-parietal electrodes: the low-constraint condition was always more negative than the high-constraint condition. However, in the face of large inter-individual variability, for CI users implanted early in life (age of implantation 1-3 years, n = 8) this effect appeared consistent only when the mouth was covered during the sentence frame, whereas the pattern was reversed for pre-verbally deaf participants implanted later in life (age of implantation 5-43 years, n = 9), i.e., the effect emerged only when the mouth was visible. These preliminary results suggest that not seeing the speaker’s mouth during a sentential context may affect lexical prediction differently in CI users, depending on when in life they were implanted.
Planned subsequent analyses include examining alpha-beta oscillatory activity in the silent gap prior to the presentation of the target word, arguably while predictions are being generated, and gamma activity after target onset, to observe prediction-error encoding and multisensory integration as a function of prediction. All these analyses will also be carried out for the NH control group, to further clarify the effects of developing spoken-language competence while having to rely on a poorer auditory input.

Topic Areas: Speech Perception, Multisensory or Sensorimotor Integration
