Slide Slam C2 Sandbox Series

The effects of speaker’s eye gaze on infants’ speech processing and word segmentation

Slide Slam Session C, Tuesday, October 5, 2021, 12:30 - 3:00 pm PDT

Melis Çetinçelik (1), Caroline F. Rowland (1,2), Tineke M. Snijders (1,2); (1) Max Planck Institute for Psycholinguistics; (2) Donders Institute for Brain, Cognition and Behaviour, Radboud University

In face-to-face interactions, speech is inherently multimodal, accompanied by information such as the speaker's facial expressions, lip movements, and eye gaze. Such cues may facilitate speech processing, and may be especially important for infants, for whom speech processing and word segmentation are challenging tasks. Among these multimodal cues, eye gaze is a ubiquitous, powerful social cue that facilitates children's learning across domains of cognitive development, including language development. The ability to establish eye contact with a communication partner and to follow their gaze allows infants to orient and attend to relevant information in naturally noisy environments. Accordingly, measures of early gaze following and responses to joint attention correlate positively with receptive and expressive vocabulary (Brooks & Meltzoff, 2008). However, the effects of the speaker's eye gaze on other aspects of language development are less clear. This is an important omission, given the potential role of eye gaze as an ostensive cue that optimizes information transfer between child and adult (Csibra & Gergely, 2009). Eye gaze might have a general enhancement effect, facilitating learning in other aspects of language as well, such as speech perception and word segmentation, by increasing infants' attention to the speech signal. In adult studies, attention to speech has been found to enhance cortical speech tracking (Lesenfants & Francart, 2020; Rimmele et al., 2015).

In the current study, we investigated infants' cortical tracking of continuous speech and word segmentation in ostensive and non-ostensive conditions. Typically developing 10-month-old infants watched videos of an adult Dutch speaker telling stories in infant-directed speech, addressing the infant with either direct or averted eye gaze. Each audio-visual story consisted of four sentences, with one word repeated in every sentence. Each video was followed by audio-only isolated words (familiar/novel). 32-channel EEG was recorded throughout the experiment.

Our aim was to determine (1) whether infants' cortical tracking of speech during the audio-visual stories, measured by speech-brain coherence (at the syllable and stressed-syllable rates), was associated with word recognition performance (measured by the ERP word familiarity effect to the audio-only isolated words in the 250-500 ms and 600-800 ms time windows); and (2) whether the ostensiveness of the adult's speech, signalled by their eye gaze direction (direct vs. averted), facilitated cortical tracking of speech and word recognition. We will compare speech-brain coherence in the direct vs. averted gaze conditions, and investigate whether the ERP word familiarity effect is larger when the speaker addressed the infant with direct rather than averted gaze during the familiarisation phase. Cluster randomization statistics will be used for the analyses. We aim to include 48 infants in the final dataset and have currently tested 43. Preliminary results will be presented at the conference.
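To make the planned coherence measure concrete, the sketch below shows, in Python (NumPy/SciPy), one way speech-brain coherence between a single EEG channel and the speech amplitude envelope could be computed. It is a minimal illustration under stated assumptions: the sampling rate, filter settings, and frequency bands are placeholders chosen for the example, not the authors' actual analysis pipeline.

    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt, coherence

    FS = 500  # assumed common sampling rate (Hz) for EEG and audio

    def speech_envelope(audio, fs=FS):
        # Broadband amplitude envelope via the Hilbert transform,
        # low-pass filtered below 10 Hz to keep the slow modulations
        # that carry syllable-rate information.
        env = np.abs(hilbert(audio))
        b, a = butter(4, 10.0 / (fs / 2.0), btype="low")
        return filtfilt(b, a, env)

    def speech_brain_coherence(eeg_channel, envelope, fs=FS):
        # Magnitude-squared coherence between one EEG channel and the
        # speech envelope; 4-s windows give 0.25 Hz frequency resolution.
        f, cxy = coherence(eeg_channel, envelope, fs=fs, nperseg=4 * fs)
        return f, cxy

    # Illustrative use with random data standing in for real recordings
    rng = np.random.default_rng(0)
    audio = rng.standard_normal(60 * FS)   # 60 s of "speech"
    eeg = rng.standard_normal(60 * FS)     # one EEG channel, same length
    env = speech_envelope(audio)
    f, cxy = speech_brain_coherence(eeg, env)
    syllable_band = (f >= 4) & (f <= 5)    # assumed syllable-rate band
    print("Mean 4-5 Hz coherence:", cxy[syllable_band].mean())

In the actual analysis, such coherence spectra would be computed per infant, channel, and gaze condition, and the direct vs. averted comparison would use the cluster randomization statistics named above rather than the single-band summary shown here.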
