Poster D68, Friday, August 17, 4:45 – 6:30 pm, Room 2000AB

A simulation-based approach to statistical power with ERPs

Chia-Wen Lo1, Jonathan Brennan1;1University of Michigan

Replicability in event-related potential (ERP) studies has drawn increased attention recently (Boudewyn et al., 2017; Cohen, 2017; Luck and Gaspelin, 2017; Thigpen et al., 2017). An ERP is a spatiotemporal matrix that reflects multiple parameters (e.g., mean amplitude over time, the latency and duration of a given component, topographical distribution), so establishing that an effect “replicates” across studies is difficult. Further, although most papers report F- or t-values along with p-values, only 40% report effect sizes, 56% report mean values, and 47% report some estimate of variance (Larson and Carbine, 2017). The rarity of such reporting impedes the sample-size calculations needed for power analyses (Guo et al., 2014). To our knowledge, few studies have examined how large an effect size must be to be reliably detected with standard ERP analyses (e.g., Boudewyn et al., 2017). The goal of the current study is to provide a way to assess statistical power across a range of effect sizes, using the P600 component as a case study. Existing toolboxes, like BESA Simulator, simulate ERPs from scratch by making assumptions about source and noise models that may not reflect actual data. We instead use actual raw single-trial data as the basis for our simulations, allowing greater fidelity between simulated and actual experimental outcomes. In this data-based approach, the simulation proceeds in four stages: (i) single-trial data from an experiment, already preprocessed, are randomly divided into partitions that reflect conditions for each participant; (ii) a stochastic effect E drawn from a Gaussian distribution is added to each trial in one partition; (iii) the data are averaged and a group analysis is conducted as it would be for a typical experiment; and (iv) steps (i)–(iii) are repeated, yielding a distribution of statistical outcomes in which the “true” effect E is known.
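The four stages above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `simulate_power`, the Gaussian SD of 0.5 μV for E, the use of a paired t-test on window-mean amplitudes as the "group analysis," and the synthetic stand-in data are all assumptions made for the example.

```python
import numpy as np
from scipy import stats

def simulate_power(trials, effect_uv, window, n_sims=200, alpha=0.05, seed=0):
    """Estimate power for a known effect E injected into single-trial data.

    trials    : array (n_subjects, n_trials, n_times) of preprocessed
                single-trial amplitudes (real data in the actual method).
    effect_uv : mean of the Gaussian distribution E is drawn from
                (SD fixed at 0.5 uV here -- an assumption).
    window    : slice of time samples over which amplitude is averaged.
    """
    rng = np.random.default_rng(seed)
    n_subj, n_trials, _ = trials.shape
    n_sig = 0
    for _ in range(n_sims):
        cond_a, cond_b = [], []
        for s in range(n_subj):
            # (i) randomly partition this subject's trials into "conditions"
            order = rng.permutation(n_trials)
            half = n_trials // 2
            a = trials[s, order[:half]].copy()
            b = trials[s, order[half:]].copy()
            # (ii) add a stochastic effect E to every trial in one partition
            a[:, window] += rng.normal(effect_uv, 0.5, size=(half, 1))
            # (iii) average over trials and the analysis window per subject
            cond_a.append(a[:, window].mean())
            cond_b.append(b[:, window].mean())
        # (iii) group analysis: paired t-test across subjects
        _, p = stats.ttest_rel(cond_a, cond_b)
        n_sig += p < alpha
    # (iv) repeated simulations: fraction detecting the known effect = power
    return n_sig / n_sims

# Usage with synthetic noise standing in for real single-trial data:
rng = np.random.default_rng(1)
trials = rng.normal(0.0, 5.0, size=(43, 100, 250))
power = simulate_power(trials, effect_uv=1.5, window=slice(125, 175))
```

Because the injected effect E is known exactly, the fraction of significant outcomes directly estimates power at that effect size.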
The distribution of E is parameterized in terms of amplitude, latency, duration, and topography. These parameters can be estimated from the existing literature, or outcomes across a range of values can be compared. We demonstrate the method with data from an N = 43 study designed to test for P600 effects in wh-question dependencies in Mandarin (Lo and Brennan, 2017). We evaluate the power to detect an effect specific to one target condition relative to two control conditions. Prior literature on P600 effects in wh-questions indicates an effect of between 1 and 2 μV (e.g., Kaan et al., 2000; Phillips et al., 2005). In the current study, we test effect sizes ranging from 0.5 to 2 μV while keeping topography and latency fixed (central posterior electrodes, 500–700 ms). Results show that power to detect E = 0.5 μV is 0.09, E = 1.0 μV is 0.21, E = 1.5 μV is 0.75, and E = 2 μV is 0.95. The current study shows that the effect sizes detectable in EEG data can be quantified with a simulation-based approach.
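The sweep across effect sizes can be illustrated compactly at the subject-average level. The between-subject SD of 2.0 μV and the simulation count are placeholders, not values from the study, so the printed power values will not reproduce the figures reported above; the sketch only shows the shape of the procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subj, n_sims, alpha = 43, 500, 0.05
subj_sd = 2.0  # assumed SD of per-subject condition differences, in uV

powers = {}
for effect in (0.5, 1.0, 1.5, 2.0):
    # per-subject condition differences: known effect + between-subject noise
    diffs = rng.normal(effect, subj_sd, size=(n_sims, n_subj))
    # one t-test per simulated experiment (difference vs. zero)
    _, p = stats.ttest_1samp(diffs, 0.0, axis=1)
    powers[effect] = (p < alpha).mean()
    print(f"E = {effect} uV: power = {powers[effect]:.2f}")
```

Plotting `powers` against effect size yields the power curve from which a minimum reliably detectable effect can be read off.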

Topic Area: Computational Approaches