
A Neural Map of Word Meaning

In this study, we asked which areas of the cerebral cortex represent information about the meanings of words. Previous neuroimaging studies had indicated that large portions of the temporal, parietal, and frontal lobes participate in processing language meaning, but it was unknown which regions actually encoded information about individual word meanings. We also investigated whether the neural representations of word meaning in each of these areas contained information about phenomenological experience (i.e., information related to different kinds of perceptual, emotional, and action-related experiences), as we and several other researchers have previously proposed, or whether they contained primarily categorical or distributional (i.e., word co-occurrence statistics) information.
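To make the contrast between these three accounts concrete, here is a minimal Python sketch with invented toy values (the words, features, and numbers are ours, purely for illustration). Each account describes words as vectors of a different kind, and each therefore predicts its own pattern of pairwise similarities between word meanings, a representational dissimilarity matrix (RDM):

```python
import numpy as np
from scipy.spatial.distance import pdist

# Toy example: three words described under each account of word meaning.
# All values are invented for illustration.
words = ["dog", "hammer", "thunder"]

# Experiential model: ratings of relevance on sensory/motor/affective
# features (columns: vision, audition, touch, manipulation, emotion).
experiential = np.array([
    [5.0, 4.0, 4.5, 1.0, 4.0],   # dog
    [4.0, 2.0, 4.0, 5.0, 0.5],   # hammer
    [1.0, 5.0, 0.0, 0.0, 3.5],   # thunder
])

# Categorical model: one-hot taxonomic category (animal, tool, event).
categorical = np.eye(3)

# Distributional model: vectors derived from word co-occurrence statistics
# (random stand-ins here; real models include word2vec or GloVe).
rng = np.random.default_rng(0)
distributional = rng.standard_normal((3, 300))

# Each account predicts its own pairwise dissimilarity structure (an RDM),
# which can then be compared against the brain's similarity structure.
for name, vectors in [("experiential", experiential),
                      ("categorical", categorical),
                      ("distributional", distributional)]:
    rdm = pdist(vectors, metric="cosine")  # condensed upper triangle
    print(name, np.round(rdm, 2))
```

Note how the categorical model treats all three words as equally dissimilar, while the experiential model predicts graded similarities; it is these diverging predictions that allow the accounts to be teased apart against brain data.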

We used functional MRI along with a technique called “representational similarity analysis (RSA) searchlight mapping” to generate a high-resolution map of the brain regions in which word meaning information was activated while participants silently read individual words presented on a screen (a different word was presented every few seconds). We scanned 64 participants across two different experiments. The two experiments used different sets of words, for a total of 522 unique English nouns, including animals, food/plants, tools, vehicles, body parts, human traits, quantities, social events, verbal events, sound events, and negative events.
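As background on the method, the sketch below illustrates the core searchlight RSA computation in schematic Python with toy random data; it is not the authors' actual pipeline, and the helper names are ours. For each small neighborhood ("sphere") of voxels, the pairwise dissimilarities between word-evoked activation patterns form a neural RDM, which is rank-correlated with a model RDM, yielding one map value per searchlight center:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def searchlight_rsa(betas, neighborhoods, model_rdm):
    """Schematic RSA searchlight (illustrative, not the published pipeline).

    betas         : (n_words, n_voxels) activation estimates, one row per word
    neighborhoods : list of voxel-index arrays, one "sphere" per center voxel
    model_rdm     : condensed vector of model dissimilarities over word pairs
    Returns one Spearman correlation per searchlight center.
    """
    scores = np.empty(len(neighborhoods))
    for i, sphere in enumerate(neighborhoods):
        # Neural RDM: pairwise pattern dissimilarity within this sphere
        neural_rdm = pdist(betas[:, sphere], metric="correlation")
        # Rank correlation between neural and model similarity structures
        scores[i] = spearmanr(neural_rdm, model_rdm)[0]
    return scores

# Toy usage: 20 "words", 100 "voxels", 3-voxel neighborhoods, random data
rng = np.random.default_rng(1)
betas = rng.standard_normal((20, 100))
spheres = [np.arange(c, c + 3) for c in range(97)]
model_rdm = pdist(rng.standard_normal((20, 5)), metric="cosine")
print(searchlight_rsa(betas, spheres, model_rdm)[:5])
```

Repeating this computation at every voxel, with anatomically defined spheres, is what produces a whole-brain map of where a model's similarity structure is reflected in neural activity.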

We found that word meaning information was represented in several high-level cortical areas (i.e., areas that are not closely connected to primary sensory or motor areas), including the classical “language areas” known as Broca’s area and Wernicke’s area. Interestingly, however, some regions not previously considered important for language processing, including the intraparietal sulcus, the precuneus, and the orbitofrontal cortex, were among those showing the strongest evidence of word meaning representation. Furthermore, while the anterior portion of the inferior frontal gyrus, known as pars orbitalis, has long been considered the most important frontal lobe area for semantic language processing, both experiments showed that a different portion, known as pars triangularis, encoded more semantic information. As expected, we found that both cerebral hemispheres participate in word meaning representation, although the left hemisphere was more prominently involved (in the vast majority of people, the left hemisphere is specialized for language processing while the right hemisphere specializes in processing visuospatial information).

Figure 1: Regions where the neural similarity structure corresponds to the experiential similarity structure of word meanings.

We also found that word meaning representations in all of these regions encode experiential information, that is, information derived from sensory, motor, and emotional experiences, even after controlling for other types of information such as semantic category and word co-occurrence statistics. The study also showed that these representations encode multimodal information, in that they combine information from multiple experiential features and sensory modalities. While some researchers have proposed that multimodal concept representations exist only in the anterior temporal lobes, this study shows that information originating from different unimodal sensory and motor areas is integrated at several locations throughout the brain.
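The phrase “controlling for” corresponds to a partial correlation between RDMs. Below is a generic Python sketch of that logic (our own simplified formulation, not necessarily the paper's exact statistical procedure): the nuisance model RDMs are regressed out of both the rank-transformed neural RDM and the rank-transformed experiential RDM, and the residuals are then correlated.

```python
import numpy as np
from scipy.stats import rankdata

def partial_spearman(neural_rdm, target_rdm, nuisance_rdms):
    """Partial Spearman correlation between a neural RDM and a target model
    RDM, controlling for nuisance model RDMs (all condensed vectors).
    A simplified illustration of the 'control for other models' logic."""
    def residualize(y, covariates):
        # Regress the rank-transformed covariates (plus intercept) out of y
        X = np.column_stack([np.ones(len(y))] +
                            [rankdata(c) for c in covariates])
        beta, *_ = np.linalg.lstsq(X, rankdata(y), rcond=None)
        return rankdata(y) - X @ beta

    r_neural = residualize(neural_rdm, nuisance_rdms)
    r_target = residualize(target_rdm, nuisance_rdms)
    return np.corrcoef(r_neural, r_target)[0, 1]

# Toy usage: a synthetic neural RDM driven by an "experiential" RDM plus a
# correlated "category" RDM; the partial correlation isolates the former.
rng = np.random.default_rng(2)
n_pairs = 190  # e.g., 20 words yield 20 * 19 / 2 = 190 pairs
experiential = rng.random(n_pairs)
category = rng.random(n_pairs)
cooccurrence = rng.random(n_pairs)
neural = experiential + 0.5 * category + rng.normal(0, 0.1, n_pairs)
print(partial_spearman(neural, experiential, [category, cooccurrence]))
```

A positive partial correlation indicates that the experiential model explains structure in the neural data over and above what the categorical and distributional models can account for.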

The numerous functional areas that make up the human cerebral cortex can be roughly described as occupying different locations along a continuum, going from modality-specific (i.e., specialized for processing information originating from a particular sensory-motor channel, such as the eyes, the ears, and the nose) to heteromodal (i.e., functional hubs that process information combined across multiple sensory-motor channels). In general, modality-specific areas are not strongly connected to each other (i.e., visual areas are not strongly connected to hearing or smell areas); they are primarily connected to mid-level areas that specialize in processing particular types of information – let us call them “experiential features”, for lack of a better term – often combining information from more than one channel (e.g., “shape” combines visual and touch information; “motion” combines visual and auditory information; “space” combines visual, motor, touch, and vestibular information; “flavor” combines smell and taste; and so on). Heteromodal hubs, on the other hand, are strongly interconnected – in fact, they are more strongly connected with each other than they are with any modality-specific areas. They sit at the top of the cortical hierarchy, with experiential feature-specific areas immediately below them, and modality-specific areas at the base. In our study, we found that these heteromodal cortical hubs were among the areas that most strongly represent multimodal word meaning information, which supports our previously published proposal of a hierarchical system of convergence zones for concept representation, as illustrated in Figure 2.

Figure 2: A hierarchical model of the functional organization of cortical convergence zones, based on Fernandino et al. (2016), Cerebral Cortex, 26(5).
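Purely as an illustrative aid, this three-level organization can be written down as a tiny data structure. The groupings below come from the examples given in the text; they are not an exhaustive or anatomically precise mapping.

```python
# Feature-level areas and the sensory-motor channels they combine,
# using the examples from the text (illustrative, not exhaustive).
FEATURES = {
    "shape":  {"vision", "touch"},
    "motion": {"vision", "audition"},
    "space":  {"vision", "motor", "touch", "vestibular"},
    "flavor": {"smell", "taste"},
}

def hub_modalities(feature_inputs):
    """Channels reaching a heteromodal hub through its feature-level inputs."""
    return set().union(*(FEATURES[f] for f in feature_inputs))

# A hub fed by "shape" and "motion" already integrates three channels,
# illustrating how multimodality increases up the hierarchy:
print(hub_modalities(["shape", "motion"]))  # {'vision', 'touch', 'audition'}
```

The point of the toy example is only that integration broadens at each level: modality-specific areas feed feature-specific areas, which in turn feed hubs that pool across many channels.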

Although heteromodal hubs are strongly interconnected, other studies from our group (not included in this paper), as well as studies of high-level perception by other groups, clearly indicate that multimodal experiential information is not uniformly represented across these regions. That is, experiential feature information is represented redundantly, but not to the same degree, in different hubs.

Study Summary & Support

Summary of Tong, J.-Q., Binder, J. R., Humphries, C. J., Mazurchuk, S., Conant, L. L., & Fernandino, L. (2022). A Distributed Network for Multimodal Experiential Representation of Concepts. The Journal of Neuroscience, 42(37), 7121–7130.

The study was co-authored by Jia-Qing Tong, Jeffrey Binder, Stephen Mazurchuk, Colin Humphries, Lisa Conant, and Leo Fernandino, with generous support from the Department of Neurology of the Medical College of Wisconsin and the National Institutes of Health. We thank Samantha Hudson, Jed Mathis, Sidney Schoenrock, and the study participants for their help.