You are viewing the SNL 2017 Archive Website. For the latest information, see the Current Website.

Poster B62, Wednesday, November 8, 3:00 – 4:15 pm, Harborview and Loch Raven Ballrooms

Using Representations from Artificial Neural Network Models of Reading to Reveal Neural Activation Patterns for Different Reading Computations

William Graves, Rutgers University - Newark

Despite decades of research into the brain basis of reading, there is still fundamental disagreement about how cognitive models of reading map onto neural function. Neurally inspired computational models offer precise predictions, yet until recently, established methods were lacking for testing whether these predictions correspond to brain activation patterns. We used standard methods to train an artificial neural network model to map from visual word form input (orthography) to word sound output (phonology) across a single intermediate (hidden) layer. During functional magnetic resonance imaging, participants read aloud 465 words. Representational similarity analysis was used to test for cortical regions in the left hemisphere where the modeled similarity structure across the words corresponded to their neural similarity structure. Orthographic representations corresponded to activation patterns in early visual cortex, ventral occipito-temporal cortex (vOT), anterior temporal lobe (ATL), and inferior frontal cortex (IFC). Hidden unit representations corresponded to activation patterns in IFC, vOT, and areas of ATL partially distinct from orthography. Phonological representations corresponded to activation in areas of ATL and IFC also distinct from those for orthography and hidden units. These results provide direct computational evidence for existing accounts of the neural basis of reading, and provide novel evidence for diversity of reading-related function in the ATL.
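The pipeline the abstract describes (train an orthography-to-phonology network with one hidden layer, then compare each layer's representational dissimilarity matrix to a neural one) can be sketched as below. Everything here is an illustrative assumption, not the authors' actual model or data: the toy binary word codes, the network dimensions, the plain backpropagation training, and the simulated "neural" patterns standing in for fMRI data.

```python
# Hedged sketch of the model + RSA pipeline from the abstract.
# All dimensions, data, and the simulated neural RDM are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy "words": random binary orthographic inputs and phonological targets.
n_words, n_orth, n_hidden, n_phon = 40, 20, 10, 15
X = rng.integers(0, 2, size=(n_words, n_orth)).astype(float)
Y = rng.integers(0, 2, size=(n_words, n_phon)).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained with plain backpropagation (a standard method,
# as the abstract indicates, though not necessarily the one used).
W1 = rng.normal(0, 0.5, (n_orth, n_hidden))
W2 = rng.normal(0, 0.5, (n_hidden, n_phon))
lr = 0.5
for _ in range(2000):
    H = sigmoid(X @ W1)            # hidden-layer representations
    P = sigmoid(H @ W2)            # phonological output
    dP = (P - Y) * P * (1 - P)     # output-layer error signal
    dH = (dP @ W2.T) * H * (1 - H) # backpropagated hidden-layer error
    W2 -= lr * (H.T @ dP) / n_words
    W1 -= lr * (X.T @ dH) / n_words

def rdm(acts):
    """Representational dissimilarity matrix: 1 - Pearson r between word patterns."""
    return 1.0 - np.corrcoef(acts)

def spearman(a, b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(a, k=1)
    ra = np.argsort(np.argsort(a[iu])).astype(float)
    rb = np.argsort(np.argsort(b[iu])).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

# Model RDMs for each representational layer.
rdm_orth = rdm(X)
rdm_hidden = rdm(sigmoid(X @ W1))

# A hypothetical "neural" RDM: a stand-in for voxel patterns from one region,
# simulated here as noisy orthographic codes.
rdm_neural = rdm(X + rng.normal(0, 0.5, X.shape))

print(f"orthographic RDM vs neural RDM: rho = {spearman(rdm_orth, rdm_neural):.2f}")
print(f"hidden-layer RDM vs neural RDM: rho = {spearman(rdm_hidden, rdm_neural):.2f}")
```

In an actual analysis the simulated neural RDM would be replaced by RDMs computed from fMRI activation patterns (e.g., within searchlights or regions of interest), and the model-to-neural correlation would be mapped across cortex, which is how layer-specific correspondences in vOT, ATL, and IFC could be identified.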

Topic Area: Perception: Orthographic and Other Visual Processes
