(How well) do you see what she’s saying? Inter-individual variability and correlates of the audiovisual speech benefit in behaviour and MEG

Poster D111 in Poster Session D, Wednesday, October 25, 4:45 - 6:30 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Jacqueline von Seth1, Máté Aller1, Matt Davis1; 1University of Cambridge

It is well-established that seeing the face of a speaker can substantially enhance speech perception, especially in noisy environments. However, this audiovisual benefit is highly variable across individuals and measurement indices (Grant & Seitz, 1998; Tye-Murray et al., 2016). Previous studies have demonstrated neural tracking of acoustic features during silent lip-reading (Suess et al., 2022) and probed differences in speech tracking between unisensory and audiovisual conditions (Aller et al., 2022). Yet, inter-individual differences in neural speech tracking for audiovisual speech, and their behavioural relevance, remain poorly understood. Here we present a planned study designed to explicitly compare neural indices of the audiovisual benefit using temporal response functions (TRFs), based on targeted recruitment of exceptional speechreaders and trial-level measures of intelligibility. We also present re-analyses of existing MEG data (N=14) and discuss preliminary results of an ongoing, large-scale behavioural study designed to a) quantify the distribution of the audiovisual benefit for acoustically degraded speech in the general, healthy population, b) probe cognitive correlates of an enhanced benefit and c) validate our intelligibility-matched audiovisual benefit measure. Rather than comparing changes in intelligibility due to added visual speech, we measure the relative intelligibility of matched audiovisual (AV) and auditory-only (AO) speech. In our behavioural study, participants report words in sentences and words in isolation, and complete a forced-choice phoneme identification task in AO, AV and visual-only (VO) conditions. Acoustic clarity is manipulated using a noise-vocoding procedure to create two levels of degradation matched to achieve approximately 50% accuracy in both AO and AV conditions, based on the mixing proportion of unintelligible 1-channel and intelligible 16-channel vocoded speech (Zoefel et al., 2020).
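The mixing-proportion manipulation described above can be sketched as a linear blend of the two vocoded versions of the same utterance. This is a minimal illustration only, assuming both versions are time-aligned waveform arrays; the function name and signature are hypothetical, not the authors' code:

```python
import numpy as np

def mix_vocoded(clear_16ch: np.ndarray, degraded_1ch: np.ndarray,
                p_clear: float) -> np.ndarray:
    """Blend intelligible 16-channel and unintelligible 1-channel
    noise-vocoded versions of the same utterance.

    p_clear is the mixing proportion of the 16-channel signal; raising
    it toward 1.0 increases intelligibility, allowing accuracy to be
    titrated to a target level (cf. Zoefel et al., 2020).
    Hypothetical sketch, not the study's stimulus-generation code.
    """
    assert clear_16ch.shape == degraded_1ch.shape, "signals must be time-aligned"
    assert 0.0 <= p_clear <= 1.0
    return p_clear * clear_16ch + (1.0 - p_clear) * degraded_1ch
```

In practice the proportion would be calibrated per degradation level (here, to roughly 50% report accuracy) rather than fixed in advance.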
We assess the test-retest reliability of our audiovisual benefit measure across experimental sessions, its distribution, and correlations between benefit measures at different levels of linguistic structure (phonemes, words, sentences). We also test predictors of an enhanced audiovisual benefit, including level-specific measures of lipreading ability (VO), domain-general cognitive and linguistic skills (non-verbal and verbal IQ), speech-in-noise perception thresholds and hearing experience, using linear mixed-effects modelling (LME). The results of our behavioural study will inform the recruitment strategy for our planned MEG study, powered to explicitly compare neural indices of audiovisual benefit in low- and high-benefit groups using TRF modelling of visual, acoustic and linguistic features. Planned analyses include: a) neural encoding of facial motion during silent lipreading, estimated using partial canonical correlation analysis (CCA) of facial landmark timecourses (Pederson et al., 2022) and b) neural indices of audiovisual integration, comparing encoding of acoustic and linguistic features in intelligibility-matched AO and AV conditions, at the MEG sensor and source level. These will be illustrated in a re-analysis of our existing MEG dataset, alongside a discussion of predictions for the planned study. Group-level comparisons of individuals with higher and lower audiovisual benefit will allow us to elucidate neural responses relevant to behaviour and improve our understanding of inter-individual variability in the audiovisual benefit for speech.
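TRF modelling of the kind described above is commonly implemented as time-lagged ridge regression from stimulus features to the neural signal. A minimal single-feature, single-channel sketch follows; it is illustrative only (real analyses, e.g. with the mTRF toolbox or MNE-Python, would use multiple features, cross-validated regularisation, and per-sensor or source-level fits), and is not the authors' analysis pipeline:

```python
import numpy as np

def estimate_trf(stimulus: np.ndarray, response: np.ndarray,
                 lags: range, lam: float = 1.0) -> np.ndarray:
    """Estimate a temporal response function (TRF) mapping one stimulus
    feature (e.g. the acoustic envelope) to one neural channel.

    Builds a time-lagged design matrix and solves the ridge-regression
    normal equations: w = (X'X + lam*I)^-1 X'y.
    """
    n = len(stimulus)
    X = np.zeros((n, len(lags)))
    # One design-matrix column per time lag of the stimulus.
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:lag, j] = stimulus[-lag:]
    return np.linalg.solve(X.T @ X + lam * np.eye(len(lags)),
                           X.T @ response)
```

Comparing such TRF weights (or the predictive accuracy of the fitted model) between intelligibility-matched AO and AV conditions is one way encoding differences of the kind described above can be quantified.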

Topic Areas: Multisensory or Sensorimotor Integration, Speech Perception
