

Poster Slam Session A
Thursday, August 16, 10:00 – 10:15 am, Room 2000C, Chair: Matt Davis

How are visual words represented? Insights from EEG-based image reconstruction during reading

Shouyu Ling1, Andy C.H. Lee1,2, Blair C. Armstrong1,3, Adrian Nestor1; 1University of Toronto Scarborough, 2Rotman Research Institute, 3BCBL. Basque Center on Cognition, Brain, and Language

Investigations into the neural basis of orthographic processing have made considerable progress by exploiting the spatial structure of functional magnetic resonance imaging (fMRI) data. For instance, fMRI patterns in high-level visual cortex have recently been used to decode the visual word forms presented to participants, providing insights into “what” and “where” orthographic information is stored. However, such investigations tell us relatively little about “what” and “when” specific properties of a word are represented over time. Here, we capitalize on the spatiotemporal structure of electroencephalography (EEG) data to examine the neural signature of visual word processing: its representational content as well as its spatial and temporal profile. Specifically, we investigated whether EEG patterns can support the decoding and reconstruction of visually presented words in neurotypical young adults. To this end, data were collected from 14 participants who performed a one-back repetition detection task on 80 three-letter high-frequency nouns with a consonant-vowel-consonant structure. EEG pattern analyses were conducted across time-domain and frequency-domain features from 64 recording channels for the purpose of word decoding (i.e., classifying which word was presented) and image reconstruction. Visual images of the words were then recovered from the decoded EEG patterns using a neural-based image reconstruction algorithm. Our results show that: (i) word classification accuracy was well above chance across participants (range: 69–78% versus a 50% chance level); (ii) word decoding and image reconstruction were achieved at similar levels of accuracy, providing an important means of visualizing an answer to the “what” question; (iii) relatedly, letters in all positions were reconstructed above chance, though with a marked advantage for the nucleus (i.e., the vowel); (iv) regarding the “when” question, the time course of classification/reconstruction accuracy peaked in the proximity of the N170 component; and (v) the relevant structure of the EEG signal across occipitotemporal electrodes was correlated with processing in the left visual word form area, as suggested by additional fMRI data. Further, we found that reconstruction results were well explained by objective orthographic similarity and image similarity across word stimuli. Lastly, we noted individual differences in representational orthographic structure across participants, which may relate to different reading strategies. Thus, our results establish the feasibility of using EEG signals to support decoding and image reconstruction as applied to orthographic stimuli. Moreover, they shed light on the temporal dynamics of orthographic processing and take steps toward accounting for the representational structure of EEG signals in terms of their fMRI counterpart. The parallels between our findings and analogous work on face processing also illustrate the domain generality of our approach. More generally, the current findings provide a new window into visual word recognition in terms of the underlying features, the spatiotemporal dynamics, and the neurocomputational principles governing visual recognition.
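As a rough illustration of the kind of pairwise word-decoding analysis described above (where chance is 50%), the following Python sketch classifies every pair of words from flattened channel-by-time EEG features using a linear classifier with cross-validation. The function names, the choice of classifier, and the simulated data are assumptions for illustration only; they are not the authors' actual pipeline.

```python
# Minimal sketch of pairwise EEG-based word decoding; assumes epoched data
# shaped (n_trials, n_channels, n_timepoints). Illustrative only.
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def pairwise_word_decoding(epochs, labels, cv=4):
    """Classify every pair of words from EEG patterns (chance = 50%).

    epochs : array (n_trials, n_channels, n_timepoints) of EEG amplitudes
    labels : array (n_trials,) of word identities
    Returns the mean pairwise cross-validated classification accuracy.
    """
    n_trials = epochs.shape[0]
    X = epochs.reshape(n_trials, -1)          # flatten channels x time into features
    clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
    accuracies = []
    for w1, w2 in combinations(np.unique(labels), 2):
        mask = np.isin(labels, [w1, w2])      # keep trials of this word pair only
        scores = cross_val_score(clf, X[mask], labels[mask], cv=cv)
        accuracies.append(scores.mean())
    return float(np.mean(accuracies))

# Toy usage with simulated data: 10 words x 8 trials, 64 channels, 50 time points
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(10), 8)
epochs = rng.standard_normal((labels.size, 64, 50))
print(f"mean pairwise accuracy: {pairwise_word_decoding(epochs, labels):.2f}")
```

With random data the accuracy hovers around the 50% chance level; above-chance accuracy across many word pairs, as in the study, indicates word-discriminative structure in the EEG patterns.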
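The reported relation between reconstruction performance and objective orthographic similarity could, in principle, be checked with a rank correlation across word pairs. The sketch below assumes position-wise letter overlap as the similarity measure and uses hypothetical pairwise accuracies; both are illustrative stand-ins, not the measures reported in the study.

```python
# Hedged sketch: relating pairwise decoding accuracy to objective orthographic
# similarity across CVC word pairs. The overlap metric is an assumption.
from itertools import combinations
from scipy.stats import spearmanr

def letter_overlap(w1: str, w2: str) -> int:
    """Count letters shared in the same position (0-3 for CVC words)."""
    return sum(a == b for a, b in zip(w1, w2))

def similarity_fit(words, pair_accuracy):
    """Spearman correlation between orthographic similarity and
    pairwise decoding accuracy over all word pairs.

    words : list of CVC word strings
    pair_accuracy : dict mapping (w1, w2) tuples to decoding accuracy
    """
    sims, accs = [], []
    for w1, w2 in combinations(words, 2):
        sims.append(letter_overlap(w1, w2))
        accs.append(pair_accuracy[(w1, w2)])
    rho, p = spearmanr(sims, accs)
    return rho, p

# Toy usage: orthographically similar pairs should be harder to tell apart,
# so similarity and accuracy should correlate negatively.
words = ["cat", "can", "dog", "dot"]
acc = {("cat", "can"): .62, ("cat", "dog"): .78, ("cat", "dot"): .74,
       ("can", "dog"): .77, ("can", "dot"): .73, ("dog", "dot"): .63}
print(similarity_fit(words, acc))
```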

Topic Area: Perception: Orthographic and Other Visual Processes

Poster A21
