Poster D38, Friday, August 17, 4:45 – 6:30 pm, Room 2000AB

Emojis and Prediction in Sentence Contexts

Ben Weissman1, Darren Tanner1;1University of Illinois at Urbana-Champaign

Emojis are ideograms that are becoming ubiquitous in modern digital communication; however, there is little work on how mixed word/emoji utterances are processed. Some work has investigated face emojis (e.g., Miller et al., 2016, 2017; Tigwell & Flatla, 2017; Weissman & Tanner, 2018), but little has examined non-face emojis, which can serve different communicative purposes (Riordan, 2017a,b). Here we report findings from two ERP experiments investigating how semantic content from emojis is predicted and integrated during sentence comprehension, and whether brain responses elicited by emojis are qualitatively similar to those elicited by words. Prior work on language has implicated three relevant ERP components: the N400, the late frontal positivity (LFP), and the P600. Unexpected words in constraining sentence contexts are widely known to elicit larger N400 amplitudes, which in some cases can be directly tied to prediction of word forms and their associated semantic features (Federmeier & Kutas, 1999; Kutas & Hillyard, 1984). Plausible but unpredicted words have been shown to elicit enhanced LFPs following enhanced N400 effects, whereas implausible unpredicted words show larger N400 amplitudes, followed by posterior P600 effects (Brothers et al., 2015; DeLong et al., 2014; Federmeier et al., 2007). Previous ERP work has investigated how the brain processes sentences completed by line drawings (Federmeier & Kutas, 2002; Ganis et al., 1996), showing that unexpected drawings elicit larger N400s, similar to those elicited by words. In the intervening years, however, the rise of emojis in everyday digital discourse has both given rise to a set of familiar ideograms that were not previously available and made pictorial sentence completions commonplace for populations who frequently communicate digitally. The present research therefore investigates emoji-related semantic processing in young adults who frequently use emojis.
Stimuli were strongly constraining sentences ending in an expected or a plausible but unexpected completion, and weakly constraining sentences ending in a plausible or implausible completion. Completions were either emojis (Experiment 1) or their word counterparts (Experiment 2). iOS emojis were normed for high name agreement, which matched their word counterparts. Experiment 1 (N = 20) results showed larger-amplitude N400s when the emoji was either unexpected or implausible for high- and low-constraint sentences, respectively, though the implausibility effect showed an anterior focus (similar to prior reports of N300 effects; Federmeier & Kutas, 2002; McPherson & Holcomb, 1999). The expectancy N400 effect in high-constraint sentences showed a very early onset (~150 ms). Unexpected emojis showed an LFP, and implausible emojis showed P600 effects, relative to their expected and plausible counterparts, respectively. Experiment 2 is currently underway and will allow detailed comparison between these emoji-related ERPs and those elicited by words in the same contexts. Our results suggest that comprehenders rapidly access semantics from emojis and that these symbols are easily incorporated into a multimodal utterance. Moreover, our findings indicate qualitative similarity between prediction violations involving contextually relevant emojis and words, as indexed by the LFP in high-constraint sentences, a finding that is, to our knowledge, novel.

Topic Area: Meaning: Lexical Semantics
