
Learning to map between discrete structure in speech sounds and motor actions

Poster A43 in Poster Session A, Tuesday, October 24, 10:15 am - 12:00 pm CEST, Espace Vieux-Port

Peter Donhauser1, Yue Sun1, David Poeppel1,2; 1Ernst Strüngmann Institute, 2New York University

Listeners arguably transform acoustic speech signals into discrete phonological representations and can use these for lexical access or repetition. While different representational formats (phonemes, syllables, distinctive features) might be employed at different computational stages of speech processing, the degree to which they are accessible to other computations might differ. For example, when learning to write, a specific representational format of speech (phonemes, for some writing systems) has to be made accessible to a new output domain. Here we developed an experimental paradigm to test the accessibility of speech representations, specifically at the distinctive-feature level, using an auditory-motor learning task. Both speech sounds and motor acts, although continuous at the surface level, can be described by discrete features in a given context. For example, the onset of the syllable /ba/ is uniquely described by the features {labial, plosive, voiced} within the sound inventory of German, and a button press with the left index finger could be uniquely described by the features {left, index} among the response options in an experimental task. When participants have to learn a new mapping between sets of syllables and motor acts, the mapping should be easier to learn if features of the two sets map onto each other. Eight participants learned auditory-motor associations between 1) four syllables whose onsets differed in manner and place of articulation: plosives (labial /ba/ or coronal /da/) and nasals (labial /ma/ or coronal /na/), and 2) four buttons pressed with the index or middle finger of the left or right hand. Participants learned multiple different mappings, with either a consistent or an inconsistent mapping between syllables and buttons, counterbalanced across participants.
We used an auditory feedback procedure in which participants heard syllables from two speakers in rhythmic alternation: the target syllable was uttered by the first speaker, and participants pressed a button to control the syllable uttered by the second speaker. Participants thus had to learn the correct auditory-motor mapping to match the target syllable. Reaction times averaged within mappings exhibited effects of order and consistency: while participants’ reaction times were influenced by the temporal order of mappings, they were faster for consistent mappings, as predicted. We then split errors into double-feature (wrong hand and finger) and single-feature (wrong hand or finger) errors and found an interaction between consistency and error type: while the two error types were equally probable for inconsistent mappings, single-feature errors were more frequent for consistent mappings, suggesting that participants did learn direct mappings at the feature level when this was possible. We thus found preliminary evidence that listeners can access speech representations at the distinctive-feature level and map them to an arbitrary but analogously structured motor output. We argue that the problem of representational formats and their accessibility is important not only for language science but also for applications, whether old or new (reading, writing, brain-computer interfaces).
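The feature-level structure of the task lends itself to a compact illustration. The sketch below is a minimal, hypothetical rendering of the design (the variable names and the particular feature-to-feature pairing are illustrative assumptions, not code or parameters from the study): a consistent mapping factors over features (e.g. place of articulation determines the hand, manner determines the finger), and an error can be scored by how many motor features it gets wrong.

```python
# Minimal sketch of the feature-level mapping idea; all names and the
# specific place->hand / manner->finger pairing are illustrative assumptions.

# Each syllable onset decomposes into two distinctive features.
syllables = {
    "ba": ("labial", "plosive"),
    "da": ("coronal", "plosive"),
    "ma": ("labial", "nasal"),
    "na": ("coronal", "nasal"),
}

# Each button press likewise decomposes into two motor features.
buttons = [
    ("left", "index"), ("left", "middle"),
    ("right", "index"), ("right", "middle"),
]

# A consistent mapping pairs feature dimensions, so the syllable-to-button
# map factors over features rather than being an arbitrary pairing.
place_to_hand = {"labial": "left", "coronal": "right"}
manner_to_finger = {"plosive": "index", "nasal": "middle"}

def consistent_map(syllable):
    place, manner = syllables[syllable]
    return (place_to_hand[place], manner_to_finger[manner])

def feature_errors(pressed, correct):
    """Count mismatched motor features: 1 = single-feature, 2 = double-feature."""
    return sum(p != c for p, c in zip(pressed, correct))

correct = consistent_map("ba")                       # ("left", "index")
print(feature_errors(("left", "middle"), correct))   # wrong finger only -> 1
print(feature_errors(("right", "middle"), correct))  # wrong hand and finger -> 2
```

Under an inconsistent mapping, no such factorization over feature dimensions exists, so single- and double-feature errors are not expected to differ in frequency.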

Topic Areas: Phonology
