
Neural bases of the facial imitation of auditory smiles in EEG and SEEG

Poster C40 in Poster Session C, Wednesday, October 25, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Camille Des Lauriers1,2,3, Jean-Julien Aucouturier5,6,7,8, Martine Gavaret1,2,3,4, Anaïs Llorens5,6,7,8; 1Institut Psychiatrique et Neurosciences de Paris, 2INSERM, 3Université Paris Cité, 4GHU Paris Psychiatrie et Neurosciences, 5Université de Franche-Comté, 6SUPMICROTECH, 7CNRS, 8Institut FEMTO-ST

Spoken interaction is based on verbal and non-verbal information. Prosody plays an important role during communication by providing information about the other person's intentions that can be processed explicitly or implicitly. For instance, it has been shown that listeners mimic smiles perceived in a speaker's voice, even when they do not see the speaker's face, and even when they do not consciously recognize the voice as smiling (Arias et al., Current Biology 2018). This behavior suggests a complex form of cognitive processing involving the phonological recognition of the spectral signature of smiled speech, the activation of sensorimotor networks linking this signature with oro-facial motor activity and, potentially, the involvement of social-cognitive and emotional circuits linked to social communication. However, the exact neural bases of the unconscious facial imitation of smiled speech remain largely unknown. In this planned study, we will collect behavioral data, facial EMG activity (zygomatic and corrugator muscles) and EEG activity recorded from electrodes placed on healthy controls' scalps (EEG, N = 20) or implanted directly in the brains of drug-resistant epilepsy patients (SEEG, N = 20) while they listen to and rate smiling speech. Forty sentences recorded by 4 speakers (2M/2F) with a smiling or non-smiling tone will be used. The EEG study will consist of passive listening to 3 blocks of 80 sentences each (40 smiling and 40 non-smiling), followed by an active listening task in which participants judge whether the sentences are smiling or not by pressing a button 2 seconds after the onset of each sentence. Only the active listening task will be used in SEEG. Based on previous literature, we expect to find more mimicry in the active than in the passive task. In the active task, we expect 10-15% false-alarm trials (non-smiling stimuli wrongly detected as smiling) and miss trials (smiles that go undetected). Zygomatic EMG is expected to be activated during hits and misses, but not false alarms and correct rejections; conversely, corrugator EMG should be deactivated during hits and false alarms, but not misses and correct rejections. In the EEG data, we will first compare the passive versus active task, then compare miss versus hit trials within the active task. We will mostly use time-frequency analyses during the sentences and ERPs locked to the decision. The SEEG paradigm will help us define the brain regions involved in the implicit detection of smiles, with a focus on the temporal lobes for auditory processes, motor and premotor areas for the preparatory response (mimicry), and the frontal lobes for the overt decision. EEG data collection will take place during Summer 2023 and the first SEEG acquisition in early Summer, at a rate of about one patient per month; the first data analyses are planned to be available in the Fall.
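As a rough illustration of the analysis logic described above, the sketch below classifies active-task trials into the four signal-detection categories, encodes the predicted facial-EMG pattern, and outlines the planned time-frequency and decision-locked ERP contrasts. It is a minimal sketch, not the authors' pipeline: the use of MNE-Python, the file name, the event labels, the frequency range, and the baseline window are all our own placeholder assumptions.

```python
import numpy as np
import mne


def sdt_label(stim_smiling: bool, resp_smiling: bool) -> str:
    """Signal-detection category for one active-task trial."""
    if stim_smiling:
        return "hit" if resp_smiling else "miss"
    return "false_alarm" if resp_smiling else "correct_rejection"


# Predicted facial-EMG pattern per category, restating the abstract's
# hypotheses: zygomatic follows the stimulus (activated on hits and misses),
# corrugator follows the percept (deactivated on hits and false alarms).
EMG_PREDICTIONS = {
    "hit":               {"zygomatic": "activated", "corrugator": "deactivated"},
    "miss":              {"zygomatic": "activated", "corrugator": "baseline"},
    "false_alarm":       {"zygomatic": "baseline",  "corrugator": "deactivated"},
    "correct_rejection": {"zygomatic": "baseline",  "corrugator": "baseline"},
}

# Hypothetical preprocessed epochs whose event labels match the signal-
# detection categories above; the file name is a placeholder.
epochs = mne.read_epochs("sub-01_task-smile-epo.fif")

# Time-frequency analysis during the sentences (Morlet wavelets),
# baseline-corrected against a pre-stimulus window.
freqs = np.arange(4.0, 40.0, 1.0)  # theta through low gamma, as an example
power = mne.time_frequency.tfr_morlet(
    epochs, freqs=freqs, n_cycles=freqs / 2.0, return_itc=False
)
power.apply_baseline(baseline=(-0.5, 0.0), mode="logratio")

# Decision-locked ERPs: the planned miss-versus-hit contrast within the
# active task (assumes the epochs were cut around the button press).
evoked_hit = epochs["hit"].average()
evoked_miss = epochs["miss"].average()
```

The EMG_PREDICTIONS table is simply the abstract's hypothesis grid in code form: an asymmetry between the two muscles would dissociate stimulus-driven mimicry (zygomatic) from percept-driven evaluation (corrugator).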

Topic Areas: Prosody, Speech Perception
