Poster A20, Tuesday, August 20, 2019, 10:15 am – 12:00 pm, Restaurant Hall

Predictability of semantic representations during pre-utterance planning

Maryam Honari-Jahromi1, Brea Chouinard2, Esti Blanco-Elorrieta3,4, Liina Pylkkänen3,4, Alona Fyshe2; 1University of Victoria, 2University of Alberta, 3New York University, 4New York University Abu Dhabi

Decoding techniques have uncovered the neural semantic representations of words and phrases during comprehension, but not during pre-utterance planning. Such techniques use machine learning to predict stimulus properties from brain recordings, using pre-established word vectors. Here, we used decoding to investigate neural representations of color words and nouns before the utterance, and to compare semantic representations of isolated words to words in phrases.

Methods: We recorded 208-channel magnetoencephalography (MEG) data from twenty right-handed, native English speakers naming colored pictures on a colored background. Condition was determined by instruction: Isolation, “Say object color” (or object name); Phrase, “Say object color and name” (a red bag on a green background yields “Red bag”); List, “Say background color, then object name” (a white bag on a red background yields “Red bag”). Stimuli appeared for 1500 ms or until the participant’s response. We applied existing machine learning methods to the 0–700 ms of MEG data after stimulus presentation, using skip-gram vectors trained on the Google News dataset (vocabulary size: 692,000). The resulting 300-dimensional vectors and the MEG data were combined to track neural representations across time (0–700 ms) and condition. We used a 100 ms window, sliding in 5 ms steps, to obtain decoding accuracies, then computed temporal generalization matrices (TGMs). TGMs measure how well the weights learned at one time or condition map onto other times or conditions, providing information about decodability across time: training and testing on the same time window gives the decoding accuracy at that time, while training and testing on different time windows (e.g., train at ~100 ms and test at ~200, 350, or 600 ms) reveals whether a representation persists or recurs across time. TGMs can also compare across conditions.

Results: Naming accuracy was >97%, with no latency differences between the list and phrase conditions. Overall decoding was significantly above chance (p<.05) for adjectives (71.03% and 64.41%) and nouns (62.97% and 70.5%) in the list and phrase conditions, respectively. Adjective TGMs: Adjective findings were relatively consistent, with nearly all TGMs identifying robust mental representations immediately prior to the utterance (~450–700 ms) and no maintained or repeated representations. Uniquely, the list TGM indicated dissimilar early (~100 ms) versus late (~600 ms) representations, possibly differentiating an early visual representation from a later semantic or motor one. There were no notable across-condition results. Noun TGMs: In general, noun representations were decodable earlier than adjectives (~100 ms) and were more protracted (~150–450 ms), but were not decodable just before the utterance. Unlike adjectives, and unlike the list condition, noun representations in the phrase condition were decodable again at later times, indicating a delayed repetition of the noun representation.

Conclusion: We found ample information in MEG data for decoding during pre-utterance planning. Adjectives showed robust mental representations immediately prior to their utterance, but no maintained or repeated representations. In contrast, noun representations were more prolonged in the phrase condition than in the list condition or for adjectives, and reappeared at later times, indicating a delayed repetition of the noun representation.
Future research could investigate region-of-interest analyses instead of whole-brain analysis, and evaluate adjectives that meaningfully alter the noun (e.g., rotten tomato, broken kettle).
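The decoding approach described in the Methods (predicting word-vector properties from MEG recordings) is not spelled out in the abstract. The following is a minimal sketch assuming ridge regression and a leave-two-out “2 vs. 2” evaluation, both common in this literature; all names, shapes, and the random placeholder data are illustrative, not the authors’ pipeline.

```python
# Minimal sketch: decode 300-d skip-gram word vectors from MEG features.
# Ridge regression and the 2 vs. 2 test are assumptions for illustration.
import numpy as np
from itertools import combinations
from sklearn.linear_model import Ridge
from scipy.spatial.distance import cosine

rng = np.random.default_rng(0)
n_trials, n_channels, n_dims = 30, 208, 300   # 208 MEG channels, 300-d vectors
X = rng.standard_normal((n_trials, n_channels))  # MEG features, one 100 ms window
Y = rng.standard_normal((n_trials, n_dims))      # word vector of each trial's stimulus

def two_vs_two_accuracy(X, Y, model=Ridge(alpha=1.0)):
    """Hold out two trials, train on the rest, and test whether the
    predicted vectors are closer to their own word vectors than to
    the swapped pairing. Chance level is 0.5."""
    correct, total = 0, 0
    for i, j in combinations(range(len(X)), 2):
        train = np.setdiff1d(np.arange(len(X)), [i, j])
        model.fit(X[train], Y[train])
        p_i, p_j = model.predict(X[[i, j]])
        right = cosine(p_i, Y[i]) + cosine(p_j, Y[j])  # correct pairing
        wrong = cosine(p_i, Y[j]) + cosine(p_j, Y[i])  # swapped pairing
        correct += right < wrong
        total += 1
    return correct / total

print(f"2 vs. 2 accuracy: {two_vs_two_accuracy(X, Y):.2f}")  # ~0.5 on random data
```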
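Likewise, a temporal generalization matrix can be built by training a decoder in each sliding window and testing it in every other window. The sketch below assumes window-averaged channel features, a ridge classifier, and a 200 Hz sampling rate; only the 0–700 ms epoch, the 100 ms window, and the 5 ms step come from the abstract.

```python
# Minimal sketch of a temporal generalization matrix (TGM):
# train in each 100 ms window (5 ms steps) and test in every window.
import numpy as np
from sklearn.linear_model import RidgeClassifier

rng = np.random.default_rng(0)
n_trials, n_channels = 80, 208
sfreq = 200                          # samples per second (assumed)
n_samples = int(0.7 * sfreq)         # 0-700 ms epoch
win = int(0.1 * sfreq)               # 100 ms window
step = max(1, int(0.005 * sfreq))    # 5 ms step

meg = rng.standard_normal((n_trials, n_channels, n_samples))  # placeholder data
labels = rng.integers(0, 2, n_trials)   # e.g., which color word was planned

# Average each window over time to get one feature vector per trial.
starts = range(0, n_samples - win + 1, step)
feats = [meg[:, :, s:s + win].mean(axis=2) for s in starts]

half = n_trials // 2                 # simple split; the study's CV is not specified
tgm = np.zeros((len(feats), len(feats)))
for ti, Xtr in enumerate(feats):     # training window
    clf = RidgeClassifier().fit(Xtr[:half], labels[:half])
    for tj, Xte in enumerate(feats): # testing window
        tgm[ti, tj] = clf.score(Xte[half:], labels[half:])

# Diagonal entries give same-window decoding accuracy; off-diagonal entries
# show whether a representation learned at one time generalizes to another.
```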

Themes: Computational Approaches, Meaning: Combinatorial Semantics
Method: Computational Modeling
