Time-course of processing for syntactic properties in spoken word recognition

Poster A24 in Poster Session A, Thursday, October 6, 10:15 am - 12:00 pm EDT, Millennium Hall
This poster is part of the Sandbox Series.

Alexa S. Gonzalez, McCall E. Sarrett, Joseph C. Toscano; Villanova University

A fundamental issue in spoken language comprehension is understanding how linguistic representations interact across different levels of organization (e.g., phonological, lexical, syntactic, and semantic). In particular, there is debate about when different levels are accessed during spoken word recognition. Under serial processing models, comprehension is sequential (e.g., phonological processing precedes lexical access, which in turn precedes access to higher-level syntactic and semantic representations). In contrast, under parallel processing models, representations at multiple levels are activated simultaneously. The current study investigates this issue by isolating neural responses to one type of higher-order information, the syntactic class of a word, from low-level acoustic and phonological responses. This allows us to identify the earliest time at which listeners distinguish syntactic class information in speech. Using a component-independent event-related potential (ERP) design, we collected EEG responses to spoken words varying in syntactic class. Stimuli consisted of synthesized disyllabic nouns and adjectives. The adjective and noun lists were matched for word frequency, phonological neighborhood density, and biphone probability. A cross-splicing procedure was used to cancel out low-level acoustic differences, such that the same set of initial and final syllables occurred in both lists. The point of disambiguation (POD; i.e., the earliest point at which the incoming acoustic information indicates whether the word is a noun or an adjective) occurred at the syllable boundary. EEG data were recorded from 32 electrodes placed at International 10-20 System sites. Electrode impedances were below 10 kΩ, data were sampled at 500 Hz, and recordings were referenced online to the left mastoid and re-referenced offline to the average of the two mastoids. On each trial, participants heard a spoken word and then saw a visually presented word 500±150 ms later. Participants performed a two-alternative forced-choice task in which they judged whether the auditory and visual words shared the same syntactic class (match vs. mismatch). There were 20 items in total (10 nouns, 10 adjectives), and each auditory word was repeated 19 times (once with each other item as the visual comparison), for a total of 380 experimental trials. We predicted that overlap in the time-course of processing across levels of linguistic organization would produce effects of syntactic class on ERP responses within 200 ms after the POD, a window during which listeners are still processing acoustic properties of the words. To evaluate this prediction, we used a decoding analysis to determine when syntactic class could be classified from the ERP responses. This analysis showed that syntactic class is decodable at approximately 200 ms post-POD, supporting the prediction that different levels of representation have overlapping time-courses. Overall, these results support a parallel, interactive processing model of spoken word recognition, in which higher-level information, such as syntactic class, is accessed while acoustic analysis is still under way.
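To make the design concrete, the 380-trial count follows from pairing each of the 20 auditory items once with every other item as the visual comparison (20 × 19 = 380). The sketch below reconstructs such a trial list; the placeholder item labels are illustrative, not the actual stimuli.

```python
# Hypothetical reconstruction of the trial list: each of the 20 auditory words
# (10 nouns, 10 adjectives) is paired once with every *other* item as the
# visual comparison, giving 20 * 19 = 380 match/mismatch trials.
from itertools import permutations

nouns = [f"noun{i}" for i in range(10)]        # placeholder item labels
adjectives = [f"adj{i}" for i in range(10)]    # placeholder item labels
items = [(w, "noun") for w in nouns] + [(w, "adjective") for w in adjectives]

trials = [
    {"auditory": a, "visual": v, "match": class_a == class_v}
    for (a, class_a), (v, class_v) in permutations(items, 2)
]
assert len(trials) == 380  # matches the design described in the abstract
```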
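The abstract does not name the decoding method. One common way to obtain a time-course result of this kind is time-resolved classification over POD-locked EEG epochs; the sketch below assumes MNE-Python with a logistic-regression classifier, and the file name and event labels are hypothetical.

```python
# Minimal sketch of a time-resolved decoding analysis (classifier, toolbox,
# file name, and event labels are assumptions for illustration, not the
# authors' actual pipeline).
import numpy as np
import mne
from mne.decoding import SlidingEstimator, cross_val_multiscore
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Epochs time-locked to the point of disambiguation (POD); hypothetical file.
epochs = mne.read_epochs("sub-01_pod-epo.fif")

X = epochs.get_data()  # shape: (n_trials, n_channels=32, n_times)
y = (epochs.events[:, 2] == epochs.event_id["noun"]).astype(int)

# Fit one linear classifier per time point; chance-level AUC is 0.5.
clf = make_pipeline(StandardScaler(), LogisticRegression(solver="liblinear"))
time_decoder = SlidingEstimator(clf, scoring="roc_auc", n_jobs=1)
scores = cross_val_multiscore(time_decoder, X, y, cv=5).mean(axis=0)

# A crude first-above-chance estimate of when syntactic class becomes
# decodable (~200 ms post-POD in the reported results).
times_ms = epochs.times * 1000
print(times_ms[np.argmax(scores > 0.5)])
```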

Topic Areas: Writing and Spelling, Prosody