From Sounds to Words: Evidence for Lexical Representations Distinct from Nonwords

Poster B55 in Poster Session B and Reception, Thursday, October 6, 6:30 - 8:30 pm EDT, Millennium Hall
Also presenting in Poster Slam B, Thursday, October 6, 6:15 - 6:30 pm EDT, Regency Ballroom

David O Sorensen1, Enes Avcu2, Skyla Lynch2, Seppo Ahlfors2,3,4, David Gow2,3,4,5; 1Harvard Medical School Division of Medical Sciences, 2Massachusetts General Hospital/Harvard Medical School, 3Athinoula A. Martinos Center for Biomedical Imaging, 4Harvard-MIT Division of Health Sciences and Technology, 5Salem State University

Recent advances in the application of intracranial neurophysiology and neural decoding to map sensitivity to features in the superior temporal gyri have created a solid grounding for our understanding of the earliest stages of speech categorization. In contrast, there is still a lack of consensus on questions as basic as how words are represented and in what way this word-level representation influences downstream processing. Evidence from behavioral, modeling, and neuroimaging experiments demonstrates that nonwords produce effects similar to those of words across a wide range of measurements, suggesting similar underlying representations. To identify brain regions that are selectively sensitive to words but not nonwords, we performed a neural decoding study using a unique transfer design that allowed direct comparisons between training based on words and nonwords while controlling for phonological similarity. We recorded MEG and EEG from 20 native English speakers as they listened to spoken items from six distinct phonological neighborhoods and judged whether each stimulus was an English word. Neighborhoods included both word and nonword phonological neighbors of a set of CVC hub words. All participants achieved greater than 80% accuracy on the task. We generated a set of 39 functionally unique ROIs based on MEG activity in the period from 100-500 ms post-stimulus onset. For each ROI, we trained support vector machine classifiers to discriminate between presentations from each neighborhood in a pairwise, transfer-learning design. We hypothesized that classifiers trained to distinguish between phonological neighbors of hub words (e.g., poog and taid) should also be able to distinguish the corresponding hub words (pig and toad) without additional training. Neural decoding results were integrated into Granger causality-based effective connectivity analyses to examine how regions that achieved significant classification performance interacted with other regions. A widespread group of regions showed early (~100-350 ms) decoding performance during stimulus presentation. This group included several ROIs in the temporal lobe, as well as ROIs in articulatory regions of the pre- and post-central gyri. These early decoding windows were not sensitive to lexicality (as they occurred before stimulus presentation was complete and thus before any judgment about lexical status could be made). A later decoding regime (~400-500 ms) showed sensitivity to lexicality in a subset of temporal lobe ROIs, including the left pMTG and intermediate regions of the superior, middle, and inferior temporal gyri in the right hemisphere. Granger causality analyses showed that word-evoked activity in these regions influenced decoding accuracy within this subset of regions more strongly than nonword-evoked activity did. Because these patterns of brain activity both support decoding and drive downstream processing in networked temporal lobe regions, they can be described as representations that are sensitive to both phonological content and lexicality. Thus, while nonwords do activate language network resources, they do so via representations that are substantively different from those of words even at relatively early stages of the language network.
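
To illustrate the transfer-decoding logic described above, the following minimal sketch (not the authors' pipeline) uses scikit-learn with simulated single-trial ROI data: a linear SVM is trained to separate trials from two nonword neighborhoods (items like poog vs. taid) and then scored, without retraining, on trials evoked by the corresponding hub words (pig vs. toad). All variable names, data shapes, and the simulated data itself are illustrative assumptions.

```python
# Minimal, hypothetical sketch of the pairwise transfer-decoding scheme:
# train a classifier on two nonword neighborhoods, test it on the hub words.
# Data here are simulated; shapes, names, and preprocessing are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_features = 80, 300  # trials x (sources * time points) for one ROI window

def simulate_neighborhood(offset):
    """Stand-in for preprocessed single-trial MEG/EEG activity from one neighborhood."""
    return rng.normal(loc=offset, scale=1.0, size=(n_trials, n_features))

# Training data: trials evoked by nonword neighbors of two hub words.
nonword_A = simulate_neighborhood(0.0)   # e.g., items like "poog"
nonword_B = simulate_neighborhood(0.3)   # e.g., items like "taid"

# Transfer-test data: trials evoked by the corresponding hub words.
word_A = simulate_neighborhood(0.0)      # e.g., "pig"
word_B = simulate_neighborhood(0.3)      # e.g., "toad"

# Fit a linear SVM to discriminate the two nonword neighborhoods.
X_train = np.vstack([nonword_A, nonword_B])
y_train = np.concatenate([np.zeros(n_trials), np.ones(n_trials)])
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
clf.fit(X_train, y_train)

# Transfer test: score the same classifier, without retraining, on the hub words.
X_test = np.vstack([word_A, word_B])
y_test = np.concatenate([np.zeros(n_trials), np.ones(n_trials)])
print(f"Transfer decoding accuracy: {clf.score(X_test, y_test):.2f}")
```

Under this scheme, above-chance transfer accuracy would indicate that the classifier exploits structure shared between nonword neighbors and their hub words, which is the comparison the design is built around.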

Topic Areas: Speech Perception, Phonology and Phonological Working Memory