
Poster C88, Wednesday, August 21, 2019, 10:45 am – 12:30 pm, Restaurant Hall

Leveraging megastudy data in neuropsychological assessment of reading

J. Vivian Dickens1, Sarah F. Snider1, Rhonda B. Friedman1, Peter E. Turkeltaub1,2; 1Georgetown University Medical Center, 2MedStar National Rehabilitation Hospital, Washington, DC

INTRODUCTION: Modern cognitive neuropsychology was born of seminal studies of patients with alexia, an acquired disorder of reading. Lexical dimensions that define types of alexic reading include letter length (short vs. long), spelling-sound regularity (regular vs. irregular spelling-sound correspondences), imageability (poor vs. rich mental imagery), and lexicality (words vs. pseudowords). Traditionally, deficits along these dimensions are identified separately through administration of carefully matched words that differ along a single dimension. Many modern neuroimaging and lesion studies of alexia use partially normed lists from the PALPA (Kay et al., 1996) or tests idiosyncratic to a single lab. Well-characterized assessments of alexia that leverage big data and updated measures of frequency, regularity, and imageability are notably absent.

METHODS/RESULTS: We constructed a corpus of 200 monosyllabic English words matched orthogonally on SUBTLEX-US frequency (low/high), regularity (regular-consistent/irregular-inconsistent), and imageability (low/high). Frequency was measured in occurrences per million (≤10 = low, >10 = high). Imageability (Cortese & Fugett, 2004; Coltheart, 1981) ranged from 1 to 7 (≤4 = low, >4 = high). Regular words both adhered to typical spelling-sound patterns and had a spelling-sound body consistency of 1 (Jared, 2002), while irregular words both violated typical spelling-sound patterns and had a spelling-sound body consistency < 1. Low/high frequency words were matched on letter length, consistency, imageability, and articulatory complexity. Regular/irregular words were matched on letter length, frequency, imageability, and phoneme onset. Low/high imageability words were matched on letter length, frequency, consistency, and articulatory complexity. We derived naming latency (ms) and accuracy norms from the English Lexicon Project (http://elexicon.wustl.edu; 815 healthy adults) against which patient performance can be compared. Replication of benchmark effects in normal reading aloud makes the corpus ideal for assessing alexia: 1) increased letter length relates to longer response latencies (r = .30, p < .0001); 2) low frequency words are read more slowly than high frequency words (t(198) = 6.02, p < .0001); 3) low frequency irregular words are read more slowly than low frequency regular words (t(98) = -6.73, p < .0001); 4) low frequency, low imageability irregular words are read more slowly than low frequency, high imageability irregular words (t(48) = -2.72, p = .009). In addition to item-level norms, we constructed a pseudorandomized list for testing in which each word type follows and precedes each other type an equal number of times, which minimizes order effects on overall performance while still permitting their examination. For assessing lexicality effects, we created 20 regularly spelled pseudowords, 20 orthographically unique pseudowords, and 20 pseudohomophones matched to each other and to a subset of 20 real words on length and bigram frequency.

CONCLUSION: We present a new corpus tailored to assessing alexia that leverages large-scale normative data. We are currently extending these norms in healthy older controls and left-hemisphere stroke survivors to include pseudoword naming latencies and accuracies, which are not available via the ELP, as well as lexical decision norms. These norms will be made publicly available to enable more targeted and replicable studies of the neurobiology of phonological and lexical-semantic reading processes.
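For readers who want to apply the same cell-assignment criteria to new items, the short Python sketch below expresses the thresholds stated in the abstract (frequency ≤10 vs. >10 per million, imageability ≤4 vs. >4, body consistency = 1 vs. < 1). The Item fields and the classify helper are hypothetical names introduced here for illustration; they are not part of the released corpus or the authors' pipeline.

```python
# Hypothetical sketch of the 2x2x2 cell-assignment criteria described in the abstract.
# Field names (freq_per_million, imageability, consistency, follows_typical_pattern)
# are assumptions for illustration, not the authors' actual data format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    word: str
    freq_per_million: float        # SUBTLEX-US frequency
    imageability: float            # 1-7 rating (Cortese & Fugett, 2004; Coltheart, 1981)
    consistency: float             # spelling-sound body consistency (Jared, 2002)
    follows_typical_pattern: bool  # adheres to typical spelling-sound rules

def classify(item: Item) -> Optional[tuple]:
    """Return (frequency, regularity, imageability) cell labels, or None if the
    item falls outside the regular-consistent / irregular-inconsistent cells."""
    freq = "low" if item.freq_per_million <= 10 else "high"
    imag = "low" if item.imageability <= 4 else "high"
    if item.follows_typical_pattern and item.consistency == 1:
        reg = "regular"
    elif not item.follows_typical_pattern and item.consistency < 1:
        reg = "irregular"
    else:
        return None  # mixed cases are excluded from the orthogonal design
    return (freq, reg, imag)

# Example: a high-frequency, regular, high-imageability item
print(classify(Item("hand", 120.3, 6.1, 1.0, True)))  # ('high', 'regular', 'high')
```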
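The benchmark effects are standard item-level analyses; a minimal sketch of how comparable tests could be computed is given below. The synthetic data, column names, and bin labels are assumptions for demonstration only; the statistics reported in the abstract come from the actual ELP-derived norms, not from this code.

```python
# Sketch of the benchmark item-level analyses (length, frequency, regularity effects).
# The synthetic DataFrame stands in for the real item norms; in practice the same
# tests would be run on the corpus's ELP-derived latencies.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr, ttest_ind

rng = np.random.default_rng(0)
n = 200
norms = pd.DataFrame({
    "length": rng.integers(3, 8, n),                  # letters per word
    "frequency_bin": rng.choice(["low", "high"], n),
    "regularity_bin": rng.choice(["regular", "irregular"], n),
})
# fake latencies loosely reflecting the reported effects, for demonstration only
norms["latency_ms"] = (600 + 15 * norms["length"]
                       + 20 * (norms["frequency_bin"] == "low")
                       + rng.normal(0, 40, n))

# 1) Letter length vs. naming latency
r, p = pearsonr(norms["length"], norms["latency_ms"])

# 2) Frequency effect: low- vs. high-frequency latencies
t_freq, p_freq = ttest_ind(norms.loc[norms.frequency_bin == "low", "latency_ms"],
                           norms.loc[norms.frequency_bin == "high", "latency_ms"])

# 3) Regularity effect within low-frequency words
low = norms[norms.frequency_bin == "low"]
t_reg, p_reg = ttest_ind(low.loc[low.regularity_bin == "regular", "latency_ms"],
                         low.loc[low.regularity_bin == "irregular", "latency_ms"])

print(f"length r = {r:.2f}, frequency t = {t_freq:.2f}, regularity t = {t_reg:.2f}")
```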
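The pseudorandomized list requires balancing first-order transitions so that each word type follows and precedes every other type about equally often. One simple way to approximate such an order is a greedy construction like the sketch below; this is an illustrative approach, not necessarily the authors' procedure, and the condition labels and item counts are placeholders.

```python
# Greedy sketch of a transition-balanced (pseudorandomized) order: each condition
# should follow every condition (including itself) roughly equally often.
# Illustrative approximation only, not the authors' actual list-construction method.
import random
from collections import defaultdict
from itertools import product

conditions = [f"{f}-{r}-{i}" for f, r, i in
              product(("lowF", "highF"), ("reg", "irreg"), ("lowI", "highI"))]
remaining = {c: 25 for c in conditions}   # 8 cells x 25 items = 200 words (placeholder counts)
transitions = defaultdict(int)            # (previous, next) -> count

order = [random.choice(conditions)]
remaining[order[0]] -= 1
while any(remaining.values()):
    prev = order[-1]
    candidates = [c for c in conditions if remaining[c] > 0]
    # prefer the next condition whose (prev -> next) transition is least used,
    # breaking ties by how many items of that condition remain, then randomly
    best = min(candidates,
               key=lambda c: (transitions[(prev, c)], -remaining[c], random.random()))
    transitions[(prev, best)] += 1
    remaining[best] -= 1
    order.append(best)

counts = [transitions[(a, b)] for a in conditions for b in conditions]
print(f"{len(order)} trials; transition counts range {min(counts)}-{max(counts)}")
```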

Themes: Reading, Disorders: Acquired
Method: Behavioral