Poster A41, Thursday, August 16, 10:15 am – 12:00 pm, Room 2000AB

Effects of stimulus modality on semantic processing

Joshua Troche¹, Jamie Reilly²; ¹University of Central Florida, ²Temple University

Semantic categorization demands efficient coordination between verbal and nonverbal domains. We hypothesize that access to these feature domains is moderated, at least in part, by stimulus modality. Specifically, we predict that objects presented in pictorial form will bias the engagement of qualitatively different semantic features than the same items presented as words: orthographic presentation will likely access lexical/encyclopedic knowledge, whereas pictorial presentation will first engage perceptual similarity. A stimulus-driven property thus has the potential to produce differences in semantic categorization.

Participants (N = 14) were native English speakers (9 female, 5 male; mean age 21.3 years). Tetrads were presented on a screen while gaze was recorded with an SMI RED eye tracker. Each tetrad came in two versions, a picture version and an equivalent word version. In each tetrad, three of the items shared a theme based on high-school-level encyclopedic knowledge; participants had to intuit the theme and determine which item did not belong. After each trial, participants explained why they chose that item. Tetrad order was randomized, as was the order of the two versions (word-picture or picture-word), and participants returned a week later to complete the remaining version. Explanations of why an item did not belong were classified in one of four ways: encyclopedic, perceptual, affective, or "I don't know." Workers on Amazon Mechanical Turk performed the classification, with each response classified ten times; the modal classification was taken as the consensus, and workers showed an excellent level of reliability (ICC = .86).

A chi-square test was run to determine whether response type differed by presentation type (picture vs. word). The test was significant overall (χ²(2) = 76.002, p < .001), with encyclopedic responses the most common in both the word (382) and picture (292) conditions. The proportions, however, differed across the word task (encyclopedic: 71.3%, perceptual: 18.5%, affective: 10.3%) and the picture task (encyclopedic: 49.4%, perceptual: 42.5%, affective: 8.1%), with perceptual responses nearly as frequent as encyclopedic responses in the picture task.

Linear mixed-effects models were fit to test for differences in gaze measures. The three gaze outcomes were fixation duration on the target, number of fixations on the target, and revisits to the target. For each outcome, the best-fitting model included fixed effects of presentation and response type and random effects of participant and item. The picture version yielded significantly longer fixation durations (t(6) = 8.06, p < .001) and significantly more fixations (t(6) = 7.095, p < .001) and revisits (t(6) = 6.31, p < .001). Response type also showed significant main effects for fixation duration (t(6) = 2.102, p < .05), fixations (t(6) = 4.856, p < .001), and revisits (t(6) = 8.362, p < .001). Pairwise comparisons revealed that perceptual processing led to more fixations and revisits than encyclopedic processing (fixations: t = 6.833, p < .001; revisits: t = 15.35, p < .001) and affective processing (fixations: t = 2.783, p < .001; revisits: t = 8.06, p < .001). No pairwise comparisons were significant for fixation duration.

These findings suggest that stimulus modality biases the manner in which items are processed, and that when items are processed perceptually, more information is extracted from the stimuli than when they are processed lexically.
Overall, the findings suggest a semantic system that is flexible to changes in task demands.
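A minimal sketch of the consensus step described above, assuming each response's ten Mechanical Turk classifications are available as a list of labels (the labels shown are illustrative, not actual study data):

```python
from collections import Counter

# Ten hypothetical worker classifications for a single response.
labels = ["encyclopedic", "encyclopedic", "perceptual", "encyclopedic",
          "encyclopedic", "affective", "encyclopedic", "encyclopedic",
          "perceptual", "encyclopedic"]

# The modal (most frequent) label is taken as the consensus classification.
consensus, count = Counter(labels).most_common(1)[0]
print(consensus, count)  # encyclopedic 7
```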
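The chi-square result can be reconstructed approximately from the reported figures: the encyclopedic counts (382 and 292) are given above, while the perceptual and affective cells below are back-calculated from the reported percentages and are therefore estimates. With these counts, scipy reproduces the reported statistic:

```python
from scipy.stats import chi2_contingency

# Rows: word, picture; columns: encyclopedic, perceptual, affective.
# Encyclopedic counts are as reported; the remaining cells are
# approximate reconstructions from the reported percentages.
table = [[382, 99, 55],
         [292, 251, 48]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3g}")  # approximately chi2(2) = 76.0, p < .001
```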
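A sketch of the gaze analysis under the model structure stated above (fixed effects of presentation and response type; crossed random effects of participant and item). statsmodels expresses crossed random effects as variance components within a single all-encompassing group; the data frame, file name, and column names here are assumptions for illustration, not the study's actual data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format data: one row per trial, with columns fixations,
# presentation (word/picture), response_type (encyclopedic/perceptual/
# affective), participant, and item. The file name is hypothetical.
df = pd.read_csv("gaze_data.csv")

# Crossed random intercepts for participant and item, expressed as
# variance components within one constant grouping variable.
df["all"] = 1
model = smf.mixedlm(
    "fixations ~ presentation + response_type",
    data=df,
    groups="all",
    vc_formula={"participant": "0 + C(participant)",
                "item": "0 + C(item)"},
)
print(model.fit().summary())
```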

Topic Area: Meaning: Lexical Semantics
