You are viewing the SNL 2018 Archive Website. For the latest information, see the Current Website.


Poster C20, Friday, August 17, 10:30 am – 12:15 pm, Room 2000AB

Abstract rules versus abstract representation: A neural decoding analysis of localized representation in an artificial grammar task

Adriana Schoenhaut1, Anne Marie Crinnion2, Seppo Ahlfors1,3, David Gow1,3,4; 1Massachusetts General Hospital, 2Harvard University, 3Athinoula A. Martinos Center for Biomedical Imaging, 4Salem State University

Introduction: What kinds of representations support learners’ ability to generalize in artificial grammar paradigms? Marcus et al. (1999) argued that infants’ ability to generalize simple syllable repetition rules (e.g., ABB “wo fe fe” versus ABA “wo fe wo”) to tokens outside of the training set reflects reliance on abstract “algebraic” rules. Another possibility is that this ability reflects simple associative mechanisms acting on abstract representations of hierarchical structure, such as those posited in autosegmental accounts of harmony, assimilation, or Semitic morphological processes. To examine this hypothesis, we applied neural decoding techniques to determine whether abstract syllable repetition patterns could be discriminated in ROI-based spatiotemporal activation data from a phonologically and acoustically balanced training set.

Methods: MEG and EEG data were collected simultaneously during the task. Adult subjects were exposed to a stream of trisyllabic nonsense utterances (e.g., “tah dih dih”) conforming to one of three syllable repetition grammars (AAB, ABB, or ABA). They were then presented with individual utterances of the same structure but composed of different syllables, and asked to judge whether each utterance belonged to the same artificial language as the exposure speech stream. All syllables occurred an equal number of times in each position across the three rule conditions.

Results: Subjects performed the task with high accuracy. MR-constrained minimum norm reconstructions of the MEG/EEG data were created for each trial and automatically parcellated into ROIs, including regions associated with wordform representation (supramarginal gyrus) and with rule learning and application (left inferior frontal gyrus). We extracted activation timeseries from these ROIs and submitted them to support vector machine (SVM) analyses for classification by rule. Classification performance was better in wordform areas than in rule application areas.

Conclusion: These results suggest that wordform representations may include abstract syllable patterning information that could support either rule-based or form-based generalization mechanisms.
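The decoding step described in Methods can be sketched as follows. This is a minimal illustration, not the authors' pipeline: synthetic trial-by-timepoint matrices stand in for the MR-constrained source timeseries, the ROI names and all dimensions and parameters are assumptions, and a linear SVM with cross-validation plays the role of the per-ROI classification-by-rule analysis.

```python
# Hypothetical sketch of per-ROI SVM decoding of repetition rule
# (AAB / ABB / ABA) from activation timeseries. Synthetic data only;
# ROI names, trial counts, and the injected effect are illustrative
# assumptions, not results or parameters from the study.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_times = 90, 120                 # trials, samples per epoch (assumed)
rules = np.repeat([0, 1, 2], n_trials // 3)  # AAB=0, ABB=1, ABA=2

def simulate_roi(labels, signal):
    """Fake ROI timeseries: Gaussian noise plus a rule-dependent offset."""
    X = rng.standard_normal((labels.size, n_times))
    X += signal * labels[:, None]            # decodable rule information
    return X

# One ROI carries rule information, a control ROI does not.
X_wordform = simulate_roi(rules, signal=0.5)  # e.g. supramarginal gyrus
X_control = simulate_roi(rules, signal=0.0)   # no rule signal

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

acc_wordform = cross_val_score(clf, X_wordform, rules, cv=cv).mean()
acc_control = cross_val_score(clf, X_control, rules, cv=cv).mean()
print(f"wordform ROI accuracy: {acc_wordform:.2f}")
print(f"control ROI accuracy:  {acc_control:.2f}")
```

Comparing cross-validated accuracy across ROIs in this way is what licenses the abstract's contrast between wordform and rule application areas: decoding above chance (here, 1/3 for three rules) in a region indicates that its activation pattern carries rule-discriminating information.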

Topic Area: Grammar: Syntax
