Poster C78, Wednesday, August 21, 2019, 10:45 am – 12:30 pm, Restaurant Hall

A new framework for studying audiovisual speech integration: Partial Information Decomposition into unique, redundant and synergistic interactions

Hyojin Park¹, Robin A. A. Ince², Joachim Gross²,³; ¹University of Birmingham, ²University of Glasgow, ³University of Muenster

Network processing of complex naturalistic stimuli requires moving beyond mass-univariate analysis of simple statistical contrasts to methods that directly quantify representational interactions, both between brain regions and between stimulus features or modalities. In our recent work, we quantified representational interactions in MEG activity between dynamic auditory and visual speech features using an information-theoretic approach called the Partial Information Decomposition (PID). We showed that both redundant and synergistic interactions between the auditory and visual speech streams are found in the brain, but in different areas and with different relationships to attention and behaviour. In the current study, we investigated how these two interactions, as well as the unique information carried by each modality, can be characterised spatiotemporally. We computed the redundant, synergistic and unique information that dynamic auditory and visual sensory signals carry about ongoing MEG activity localised to pre-defined anatomical regions (AAL; Automated Anatomical Labeling). We first found that behaviourally relevant synergistic information was expressed differently in primary sensory areas and in higher-order areas when participants attended to matching audiovisual speech while ignoring an interfering auditory speech stream. In primary visual and auditory areas, the synergistic interaction showed low-frequency rhythmic fluctuation as a function of auditory delay, whereas in inferior frontal and precentral areas it showed low-frequency fluctuation as a function of visual delay. Second, auditory unique information in right primary sensory areas (auditory and visual) showed rhythmic fluctuation as a function of auditory delay, and this was critical for speech comprehension. The current method allows us to investigate multisensory integration in terms of explicitly quantified representational interactions: overlapping or common information content (redundancy), superlinear interactive predictive power (synergy), and the unique information provided by each modality alone. We hope this framework can provide a more detailed view of cross-modal stimulus representation and hence give insight into the cortical computations that process and combine signals from different modalities.
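As an illustration of the decomposition the abstract refers to, the sketch below (Python/NumPy, written for this summary rather than taken from the study) implements the original Williams and Beer I_min PID for two discrete sources, a binned auditory feature A and a binned visual feature V, and one discrete target M standing in for a binned MEG signal. The actual analysis operates on continuous speech and MEG time series and may rely on a different redundancy measure and estimator, so treat this only as a toy example of how the joint mutual information I(A,V; M) splits into redundant, unique and synergistic parts.

    # Toy Williams & Beer (I_min) partial information decomposition for two
    # discrete sources (A = auditory feature, V = visual feature) and one
    # target (M = binned MEG signal). Illustrative sketch only; not the
    # continuous-signal PID estimator used in the study.

    import numpy as np

    def specific_info(p_joint, source_axis):
        """Specific information I(M=m; S) for each target value m, where S is
        the source on `source_axis` (0 = A, 1 = V) of p_joint with shape
        (n_a, n_v, n_m)."""
        p_m = p_joint.sum(axis=(0, 1))                    # p(m)
        p_sm = p_joint.sum(axis=1 - source_axis)          # p(s, m)
        p_s = p_sm.sum(axis=1, keepdims=True)             # p(s)
        p_s_given_m = p_sm / p_m                          # p(s | m)
        p_m_given_s = p_sm / p_s                          # p(m | s)
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = p_s_given_m * np.log2(p_m_given_s / p_m)
        return np.nansum(terms, axis=0)                   # one value per m

    def mutual_info(p_xy):
        """I(X; Y) in bits from a joint table p(x, y)."""
        p_x = p_xy.sum(axis=1, keepdims=True)
        p_y = p_xy.sum(axis=0, keepdims=True)
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = p_xy * np.log2(p_xy / (p_x * p_y))
        return np.nansum(terms)

    def pid_imin(p_joint):
        """Redundant, unique and synergistic information about M from A and V."""
        p_m = p_joint.sum(axis=(0, 1))
        red = np.sum(p_m * np.minimum(specific_info(p_joint, 0),
                                      specific_info(p_joint, 1)))
        i_a = mutual_info(p_joint.sum(axis=1))                       # I(A; M)
        i_v = mutual_info(p_joint.sum(axis=0))                       # I(V; M)
        i_av = mutual_info(p_joint.reshape(-1, p_joint.shape[2]))    # I(A,V; M)
        unique_a, unique_v = i_a - red, i_v - red
        syn = i_av - red - unique_a - unique_v
        return dict(redundant=red, unique_auditory=unique_a,
                    unique_visual=unique_v, synergistic=syn)

    # Toy example: M = XOR of A and V carries purely synergistic information.
    p = np.zeros((2, 2, 2))
    for a in (0, 1):
        for v in (0, 1):
            p[a, v, a ^ v] = 0.25
    print(pid_imin(p))   # synergy ~1 bit, all other terms ~0

Running the XOR example at the end yields roughly one bit of synergy with near-zero redundant and unique terms, which is the signature pattern that the synergistic component of the decomposition is designed to capture.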

Themes: Perception: Speech Perception and Audiovisual Integration, Computational Approaches
Method: Electrophysiology (MEG/EEG/ECOG)