Human grid electrode ECoG reveals that local neural activity in anterior temporal cortex expresses semantic information
Poster D19 in Poster Session D with Social Hour, Friday, October 7, 5:30 - 7:15 pm EDT, Millennium Hall
Saskia L. Frisby1, Ajay D. Halai1, Christopher R. Cox2, Alex Clarke1, Akihiro Shimotake3, Takayuki Kikuchi3, Takeharu Kunieda3,4, Susumu Miyamoto3, Ryosuke Takahashi3, Riki Matsumoto3,5, Akio Ikeda3, Matthew A. Lambon Ralph1, Timothy T. Rogers6; 1University of Cambridge, 2Louisiana State University, 3Kyoto University, 4Ehime University, 5Kobe University, 6University of Wisconsin-Madison
Converging evidence from neuropsychology, positron emission tomography (PET), magnetoencephalography (MEG), and transcranial magnetic stimulation (TMS) supports the theory that the anterior temporal lobe functions as a semantic “hub”, binding modality-specific semantic information into generalisable transmodal and transtemporal representations of meaning (Lambon Ralph et al., 2017). Recent evidence from human grid electrode electrocorticography (ECoG) showed that stimulus animacy can be decoded from voltages recorded from the surface of the ventral anterior temporal lobe during a picture naming paradigm (Rogers et al., 2021). However, the raw voltages used in that work may obscure information carried in different frequency bands, making it difficult to judge whether animacy is encoded by gamma- and/or high-gamma-band activity characteristic of local processing, by lower-frequency activity characteristic of long-range transmission, or by some combination of these. To disentangle these possibilities, we reanalysed ECoG data collected from ventral anterior temporal cortex in ten participants while they named line drawings of animals and inanimate objects. We conducted a time-frequency decomposition using complex Morlet wavelets, then used logistic regression with elastic net regularisation and 10-fold nested cross-validation to decode animacy from the time-frequency power spectra. When we compared this approach with decoding from raw voltage, classifiers trained on time-frequency power performed as well as or better than those trained on voltage. Large classifier weights were placed on power features from across the frequency spectrum, indicating that a wide range of frequency bands carry information about animacy. However, many of the selected features came from the gamma and high-gamma bands, and in all but one participant classifiers trained only on these features performed as well as, or almost as well as, those trained on the full frequency spectrum.
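A decoding pipeline of this kind (Morlet wavelet time-frequency decomposition followed by elastic-net logistic regression with nested cross-validation) can be sketched roughly as below. All specifics here are illustrative assumptions, not the study's data or parameters: the trial counts, sampling rate, frequencies of interest, and the injected 70 Hz class difference are synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for single-electrode ECoG trials (hypothetical shapes;
# a weak 70 Hz component marks the "animal" class for illustration only).
n_trials, n_times, fs = 60, 256, 256.0
labels = rng.integers(0, 2, n_trials)             # 0 = object, 1 = animal
trials = rng.standard_normal((n_trials, n_times))
t = np.arange(n_times) / fs
trials[labels == 1] += 0.8 * np.sin(2 * np.pi * 70.0 * t)  # high-gamma cue

def morlet_power(signal, freqs, fs, n_cycles=5):
    """Time-frequency power via convolution with unit-energy complex Morlet wavelets."""
    out = []
    for f in freqs:
        sigma = n_cycles / (2 * np.pi * f)                 # envelope width (s)
        tw = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * tw - tw**2 / (2 * sigma**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit energy
        out.append(np.abs(np.convolve(signal, wavelet, mode="same")) ** 2)
    return np.array(out)                                   # (n_freqs, n_times)

freqs = np.array([8.0, 20.0, 40.0, 70.0, 110.0])  # alpha through high gamma
# One feature per frequency: power averaged over the trial window.
X = np.array([morlet_power(s, freqs, fs).mean(axis=1) for s in trials])

# Elastic-net logistic regression; the inner CV tunes the penalty strength,
# the outer 10-fold CV estimates decoding accuracy (nested cross-validation).
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, max_iter=5000),
)
inner = GridSearchCV(clf, {"logisticregression__C": [0.01, 0.1, 1.0]}, cv=5)
scores = cross_val_score(inner, X, labels, cv=10)
print(f"10-fold decoding accuracy: {scores.mean():.2f}")
```

Restricting `freqs` to the gamma/high-gamma range would mirror the band-limited comparison described above; in a real analysis the features would typically be the full time-frequency grid per electrode rather than a time-averaged power per band.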
These results suggest that, during picture naming, information about animacy is represented in local neuronal activity (on the assumption that processing frequency is inversely proportional to the size of the underlying processing network). In several participants we also observed time points at which decoding from power succeeded while decoding from voltage failed, and other time points at which the reverse was true. This raises the possibility that voltage and time-frequency power each independently encode information about animacy. Implications of these results for theories of the neural basis of semantic representation will be considered.
Topic Areas: Meaning: Lexical Semantics, Computational Approaches