You are viewing the SNL 2018 Archive Website.

Poster Slam Session C
Friday, August 17, 10:15 – 10:30 am, Room 2000C, Chair: Mairéad MacSweeney

Worse than useless: traditional ERP baseline correction reduces power through self-contradiction

Phillip M. Alday¹; ¹Max Planck Institute for Psycholinguistics

Baseline correction plays an important role in past and current methodological debates in ERP research (e.g. the Tanner v. Maess debate in the Journal of Neuroscience Methods), serving as a potential alternative to strong high-pass filtering. However, the very assumptions that underlie traditional baseline correction also undermine it, making it statistically unnecessary and undesirable. In particular, the assumption that the electrophysiological activity of the baseline interval does not differ systematically between conditions implies by definition that the baseline interval is essentially a by-channel noisy reference. The noise from the baseline interval is then projected into the target interval, thereby reducing power. Moreover, as a reference, the baseline interval can bias topographies, especially if the no-systematic-difference assumption is violated. This reference nonetheless serves to address non-experimental recording factors (electrode drift, differences in environmental electrical noise), but there are better methods for controlling for these environmental issues. Instead of assuming a fixed baseline correction, whether trial-by-trial or at the level of single-subject averages, we can include the baseline interval as a statistical predictor, similar to other GLM-based deconvolution approaches (e.g. removal of eye artifacts, Dimigen et al. 2011; rERP, Smith & Kutas 2014). The baseline interval can then interact with, i.e. have its influence weighted by, topographical and experimental factors. This controls for topographical biases, addresses electrode drift in block designs, does not require the no-systematic-difference assumption, and allows the data to determine how much baseline correction is actually needed. Additionally, both full traditional baseline correction and no baseline correction are included as special cases.
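The contrast between the two approaches can be sketched as a pair of regression equations (the notation here is illustrative, not taken from the abstract): traditional correction fixes the baseline's regression weight at 1 by subtraction, while the proposed approach estimates that weight from the data.

```latex
% Traditional baseline correction: subtract the baseline,
% i.e. fix its regression weight at 1
y_{\text{target}} - y_{\text{baseline}}
  = \beta_0 + \beta_{\text{cond}}\, x_{\text{cond}} + \varepsilon

% Baseline as a predictor: estimate its weight and let it
% interact with experimental (and topographical) factors
y_{\text{target}}
  = \beta_0 + \beta_{\text{cond}}\, x_{\text{cond}}
  + \beta_{\text{base}}\, y_{\text{baseline}}
  + \beta_{\text{int}} \left( x_{\text{cond}} \cdot y_{\text{baseline}} \right)
  + \varepsilon
```

Setting the baseline weight to 1 (with no interaction) recovers traditional baseline correction, and setting it to 0 recovers no correction, which is the sense in which both are special cases of the regression formulation.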
The lack of the no-systematic-difference assumption also allows this method to be applied in more naturalistic settings, which have recently begun to gain ground in M/EEG research (cf. Alday et al. 2017; Broderick et al. 2017; Brodbeck et al. 2018). In addition to this theoretical argument, we demonstrate the effectiveness of this method by reanalyzing previous ERP studies on language. We find that the empirically determined baseline correction is often much weaker than the traditional correction. Using semi-parametric simulations from mixed-effects models fit to these data, we further show that the improved fit to the data is worth the trade-off in power between additional model complexity and the noisiness of the dependent variable.
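A minimal simulation can illustrate the core claim. The sketch below (my own toy setup, not the abstract's reanalysis or mixed-effects simulations) generates single-channel trials where slow drift leaks into both the baseline and target windows, then compares traditional subtraction against a regression that estimates the baseline's weight:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000  # number of simulated trials

# Hypothetical single-channel data: a true condition effect on the target
# window plus a slow drift shared by both windows (plus independent noise).
condition = rng.integers(0, 2, n).astype(float)  # 0/1 condition code
drift = rng.normal(0.0, 2.0, n)                  # drift common to both windows
baseline = drift + rng.normal(0.0, 1.0, n)       # baseline window measurement
true_effect = 3.0
target = true_effect * condition + drift + rng.normal(0.0, 1.0, n)

# Traditional correction: subtract the baseline, i.e. force its weight to 1.
y_trad = target - baseline
X_trad = np.column_stack([np.ones(n), condition])
b_trad, *_ = np.linalg.lstsq(X_trad, y_trad, rcond=None)

# Baseline-as-predictor: include the baseline as a covariate and let
# ordinary least squares estimate how much correction is needed.
X_reg = np.column_stack([np.ones(n), condition, baseline])
b_reg, *_ = np.linalg.lstsq(X_reg, target, rcond=None)

print(f"condition effect, traditional subtraction: {b_trad[1]:.2f}")
print(f"condition effect, baseline as predictor:   {b_reg[1]:.2f}")
print(f"estimated baseline weight:                 {b_reg[2]:.2f}")
```

In this toy setup the estimated baseline weight comes out below 1 because the baseline window contains noise on top of the shared drift, mirroring the abstract's finding that the empirically determined correction is often weaker than full subtraction.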

Topic Area: Methods

Poster C61
