Poster E8, Thursday, August 22, 2019, 3:45 – 5:30 pm, Restaurant Hall

Hearing Parents’ Use of Multimodal Cues to Establish Joint Attention as a Function of Children’s Hearing Status

Heather Bortfeld, Allison Gabouer; University of California, Merced

The capacity to engage in joint visual attention lays the groundwork for language learning. Initially, researchers used gaze-following to measure engagement in joint attention (Scaife & Bruner, 1975). Since then, other visual cues have been tracked (e.g., point-following, Mundy, Hogan, & Doehring, 1996; hand-following, Yu & Smith, 2017) to assess children’s ability to follow an adult’s lead. In all cases, auditory information is assumed to be primary in establishing joint attention. Parent-child dyads in which parents are hearing and children are deaf provide an interesting context in which to examine how parents accommodate their children when the dominant communication modality is not shared. Our previous work demonstrated that hearing parents of deaf children incorporate multimodal cues when engaging in joint attention with their child, and do so to a greater degree than hearing parents of hearing children (Depowski, Abaya, Oghalai, & Bortfeld, 2015). However, it is unclear how parents and children enter the engaged state. Here we examine parents’ patterns of modality use when initiating joint attention with their deaf children (all of whom were candidates for cochlear implantation), and compare these to the patterns produced by hearing parents engaging with their hearing children. We focused on multimodal communication patterns produced by hearing parents while they engaged in free play with their children. Participants were nine severely to profoundly deaf children (females = 3; M age = 22.2 months, SD = 9.4) and their hearing parents (females = 9). Nine typically developing, age-matched children (females = 5; M age = 24.2 months, SD = 11.3) and their hearing parents (females = 5) were included as a comparison group. Video recordings of the play sessions were coded for initial instances of parent-initiated joint attention using ELAN, a software tool for the creation of complex annotations on video.
ELAN allows for multimodal, second-by-second behavioral coding of videos of parent-child interactions. Parent-initiated bids for joint attention – both successful and failed – were coded and quantified, as was the cue or combination of cues that parents used for each type of bid (e.g., using their hand or a toy to tap or touch the child, deliberately waving their hand or a toy within the child’s visual field, and vocalizing). We tallied the total raw number of bid occurrences (both successful and failed) across dyad types, and used these tallies to calculate proportions of successful and failed bids for joint attention. First, we found no differences by dyad type in overall attempts to establish joint attention, nor in successful or failed bids for joint attention. Overall, hearing parents of hearing children provided cues in the auditory domain alone (unimodally) or paired with a visual cue. Likewise, the most frequently used successful cue in the hearing parent-deaf child dyads was a multimodal one that included both auditory and visual information. Overall, our finding that multimodal (auditory-visual) cuing was used more frequently than unimodal cuing suggests that an important component of what goes into establishing joint attention is missing from currently accepted measures of joint attention.
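The tallying step described above – converting raw counts of successful and failed bids into proportions per dyad type – can be sketched as follows. This is a minimal illustration with hypothetical counts; the dyad labels and numbers are placeholders, not the study’s actual data.

```python
# Hypothetical raw bid counts per dyad type (illustrative only).
raw_bids = {
    "hearing parent / deaf child":    {"successful": 40, "failed": 15},
    "hearing parent / hearing child": {"successful": 38, "failed": 17},
}

def bid_proportions(counts):
    """Convert raw bid counts into proportions of all bids for that dyad type."""
    total = sum(counts.values())
    return {outcome: n / total for outcome, n in counts.items()}

# Proportional data for successful and failed bids, by dyad type.
proportions = {dyad: bid_proportions(c) for dyad, c in raw_bids.items()}

for dyad, props in proportions.items():
    print(dyad, {outcome: round(p, 2) for outcome, p in props.items()})
```

Within each dyad type the proportions sum to 1, which makes the two groups comparable even if their raw bid totals differ.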

Themes: Development, Disorders: Developmental
Method: Behavioral
