Poster E65, Thursday, August 22, 2019, 3:45 – 5:30 pm, Restaurant Hall

Evaluation of an auditory and visual fMRI language paradigm with reliability measures and dynamic causal modelling

Karsten Specht1,2,3, Kathrine Midgaard1, Erik Rødland1;1Department of Biological and Medical Psychology, University of Bergen, 2Mohn Medical Imaging and Visualization Centre, Haukeland University Hospital, Bergen, 3Department of Education, UiT/The Arctic University of Norway, Tromsø

The present study aimed to develop and evaluate a language paradigm for both speech perception and production that could be administered either visually or acoustically, so that patients with limited sight or hearing alike could be examined. A secondary aim was to examine whether the underlying network is independent of the sensory modality of the initial stimuli. The original paradigm was used by Berntsen et al. (2006) and rests on a concept adapted from the former TV show Jeopardy. The stimuli are simple sentences, presented either visually or aurally, and require the subject not only to understand the sentence but also to formulate a question that is semantically related to the content of the presented sentence. The paradigm therefore tests not only perception of the stimuli but also sentence and semantic processing, and covert speech production. Based on current network models, it was hypothesized that, apart from differences in primary sensory processing, both activation of, and effective connectivity within, the dorsal and ventral streams of the speech and language network (Hickok & Poeppel, 2007; Specht, 2013, 2014) would be identical and independent of the sensory modality. It was further expected that the paradigm would show high cross-modal reliability.

Method: Twenty-one healthy, right-handed participants (10 men / 11 women) were recruited for this fMRI study. Participants were aged 21 to 50 years, with a mean age of 25 years. The study was conducted on a 3T GE MR scanner. A visual and an auditory version of the paradigm were developed, with separate fMRI runs for each. The structure of the paradigm was identical for both sensory modalities and consisted of eight active blocks, each containing six trials, and eight blocks with a sensory control condition (e.g., "#### # ##" for the visual version or reversed speech for the auditory version, respectively).
Irrespective of the sensory modality, subjects had to covertly formulate an appropriate response to every trial. Data were analysed with a general linear model and dynamic causal modelling (DCM) (Friston, Harrison, & Penny, 2003). The reliability of brain activations and network connections was explored with intraclass correlation coefficients (ICC) (Specht, Willmes, Shah, & Jäncke, 2003).

Results & Discussion: The results demonstrate that, independent of the sensory modality, the paradigm reliably activated the same brain networks, namely the dorsal and ventral streams for speech processing (Hickok & Poeppel, 2007). Furthermore, the ICC analysis revealed high reliability of brain activation across sensory modalities. This was supported by the DCM analysis, which showed that the underlying network structure and connectivity were the same across sensory modalities, although the strength of the effective connectivity appeared to vary with the sensory modality. In conclusion, the explored paradigm reliably activated the most central parts of the speech and language network, independent of whether the stimuli were administered acoustically or visually, and is therefore suitable as a clinical paradigm, since patients with either visual or auditory disabilities can be examined.
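The abstract does not specify which ICC variant or software was used; as a minimal illustrative sketch, cross-modal reliability of this kind is often quantified with ICC(3,1) (two-way mixed model, consistency), treating subjects as rows and the two modalities (visual, auditory) as columns. The function below is a hypothetical helper, not the authors' actual analysis code; the input array stands in for per-subject activation estimates (e.g., contrast values from a region of interest).

```python
import numpy as np

def icc_3_1(data):
    """ICC(3,1): two-way mixed model, consistency, single measurement.

    data: (n_subjects, k_conditions) array, e.g. one activation
    estimate per subject for each sensory modality.
    """
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)          # per-subject means
    col_means = data.mean(axis=0)          # per-modality means
    ss_total = ((data - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-subject SS
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-modality SS
    ss_err = ss_total - ss_rows - ss_cols            # residual SS
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Toy data: each subject's auditory value is a constant offset from the
# visual value, so consistency (not absolute agreement) is perfect.
toy = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
print(icc_3_1(toy))  # 1.0
```

Because ICC(3,1) measures consistency, a uniform shift between modalities (as in the toy data) does not lower the coefficient; an absolute-agreement variant such as ICC(2,1) would penalize it.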

Themes: Perception: Speech Perception and Audiovisual Integration, Computational Approaches
Method: Functional Imaging
