Despite adults' remarkable appetite for a wide variety of foods, young infants show only a sweet tooth and an aversion to bitter tastes (Mennella, Pepino, & Reed, 2005). Young infants, in fact, learn what is edible and what is not through experience, by mouthing a wide range of items and by receiving feedback from adults; as such, food preferences and choices are highly influenced by a broad array of cultural forces (Fallon, Rozin, & Pliner, 1984; Rozin & Fallon, 1980). Rozin et al. (1986), for instance, presented 54 children, ranging in age from 14 months to 5 years, with 30 items to eat, including normal foods that adults eat as well as items from adult rejection categories (disgust, danger, inappropriate, and unacceptable combinations). The youngest children in the group showed a clear tendency to accept (i.e., mouth) exemplars from all the categories. Between 16 months and 5 years of age, however, children begin to reject items considered disgusting or dangerous by adults, even though it is not known whether children and adults reject food on the same grounds. It is only after the age of 5 years that children seem to clearly adopt adult rejections of inappropriate combinations of food items (Fallon et al., 1984).

From the age of 6 months on, infants also seem able to understand the meaning of common food-related words in their native language when these are spoken while the corresponding pictures are presented (Bergelson & Swingley, 2012). Around the same age, infants also seem to develop the perception and understanding of feeding actions earlier than that of other manual actions, as measured by goal anticipations (Kochukhova & Gredebäck, 2010). This enriched understanding of feeding actions has an obvious evolutionary advantage, as it facilitates access to food. These processes continue into adulthood, shaping our food preferences and habits.

For humans, choosing food is not an easy task. Unlike koalas, which chew only eucalyptus leaves, or pandas, which consume only bamboo, humans are omnivorous. Rozin (1976) was the first to introduce the concept of the omnivore's dilemma, which describes the state of anxiety experienced by individuals when they have to decide what to eat, especially in affluent countries where there is an excessive availability of food. What guides our feeding choices? First, just by visually perceiving foods, we seem able to extract information about their properties, such as calorie content and quantity, the level of transformation the foods underwent, or the context in which they are normally eaten. This is possible because perception relies on previously acquired knowledge (see Martin, 2009). Mental processes about food are also likely to be influenced by other characteristics of the food itself (e.g., health value, attractiveness) as well as by the observer's temporary internal states, such as level of satiety, immediate energy needs (Ottley, 2000), and blood-sugar concentration (Simmons et al., 2013). Finally, more stable characteristics of the perceiver have been found to modulate how food-related information is processed; for instance, research has demonstrated that the neurophysiology of reactivity to food cues is also shaped by individual body mass index (BMI; e.g., Hume et al., 2015; Toepel et al., 2012).

In this article we will first introduce some prominent theories of how conceptual knowledge is represented in the human brain, referring mainly to neuropsychological studies. With those theories in mind, we will then discuss the nature of the food category, with particular attention to its subdivision into natural food (i.e., fruit/vegetables) and manufactured food (i.e., food that underwent some kind of organoleptic transformation). After brain damage, patients' ability to recognize foodstuffs might be expected to diminish disproportionately or to be selectively affected. However, as we will discuss in detail below, depending on the theory, we may expect patients to be impaired in recognizing food as a whole category of edible items, including both natural and manufactured food (the domain-specific hypothesis, DSH), or to show a deficit for living things, including natural food, or for non-living things, including manufactured food (the sensory-functional hypothesis, SFH). Moreover, patients' recognition deficits may be limited to a single sensory modality or extend to multiple modalities. The way in which information about food breaks down in brain-damaged patients can explain how food concepts might be represented in the brain, and whether and how they guide individuals' behavior and choices.

Recognizing an object is a process whereby we identify things that have been previously experienced through one or more sensory modalities. The explanation of how object recognition works largely depends on how we think semantic memory is organized in the brain. We define semantic memory as a store of conceptual knowledge about different types of objects and conspecifics. There is a large consensus that object recognition benefits from some degree of organization in semantic memory (e.g., Mahon & Caramazza, 2009). Cognitive neuropsychologists have proposed different organizational principles or constraints based on the observation of associations or dissociations of deficits in brain-damaged patients (Caramazza, 1986; Shallice, 1988). An association of deficits affecting several of a patient's abilities is frequently observed after brain damage, and might indicate that the impaired abilities rely on the same process/subsystem damaged by the lesion. In contrast, if a given ability is impaired while another is preserved in at least one patient (while the opposite pattern is observed in at least one other patient), then one may conclude that the process/subsystem underlying the former is not causally dependent on the process/subsystem underlying the latter.
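To make this inferential logic concrete, the sketch below (in Python, with entirely made-up trial counts rather than data from any patient reviewed here) shows how a single patient's naming accuracy on two categories might be compared; the reverse asymmetry in a second patient would complete a classical double dissociation.

```python
# Hypothetical sketch: testing a category-specific dissociation in one patient.
# The trial counts are invented for illustration only.
from scipy.stats import fisher_exact

# Naming accuracy for two stimulus categories: (correct, incorrect)
living    = (9, 21)   # e.g., animals and fruit/vegetables: 9/30 correct
nonliving = (27, 3)   # e.g., tools and vehicles: 27/30 correct

# A 2x2 contingency table: rows = category, columns = correct/incorrect
odds_ratio, p_value = fisher_exact([living, nonliving])
print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.4f}")

# A reliable asymmetry here, together with the opposite asymmetry in a
# second patient, is the classical double-dissociation argument.
```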

Sensory-functional hypothesis

McCarthy, Shallice, and Warrington were the first neuropsychologists to report well-documented category-specific deficits. Warrington and Shallice (1984) studied four patients who had suffered from herpes simplex encephalitis (HSE) and showed impaired visual recognition and auditory comprehension of living things, including animals, fruit, vegetables, and plants, but also of some prepared foods, while their processing of non-living things (vehicles, toys, household tools, clothes, objects, and musical instruments) was spared (see also Table 1). In contrast, V.E.R., a patient with global dysphasia caused by an infarction in the territory of the left middle cerebral artery, described by Warrington and McCarthy (1983), showed reduced auditory-visual comprehension for objects but not for animals, food, and flowers (see also Table 1). These first well-documented observations confirmed earlier, more anecdotal reports (Hécaen & de Ajuriaguerra, 1956; McCrae & Trolle, 1956; Nielsen, 1946). Based on these dissociative patterns, McCarthy, Shallice, and Warrington (Warrington & McCarthy, 1983; 1987; Warrington & Shallice, 1984) hypothesized the existence of two putative modality-specific semantic subsystems: one subsystem would represent sensory properties of objects, such as color, texture, or taste, while the other would represent functional properties, such as their prototypical use and the functions they allow. According to the SFH, processing living things critically depends on the subsystem for sensory information, while processing non-living things critically depends on the subsystem for functional properties. Category-specific semantic deficits would then arise as a consequence of damage to either the former or the latter subsystem. Subsequently, Borgo and Shallice (2001; 2003) proposed that sensory information (i.e., color and texture but not shape) differentially influences the processing not only of living things but also of sensory-quality categories such as materials, edible substances, and drinks. Thus, patients with a damaged sensory-semantic subsystem should always show an association of deficits affecting all living things as well as the sensory-quality categories.

Table 1 Patients with category-specific deficits affecting or sparing food. Section a - Cases with disproportionately impaired recognition of natural food (fruit/vegetables), manufactured food, and other living things. Section b - Cases with disproportionately better recognition of natural food (fruit/vegetables) and manufactured food. Section c - Cases with disproportionately better or worse recognition of natural (fruit/vegetables) and/or manufactured food than of animals. Section d - Cases with disproportionately impaired recognition of natural food (fruit/vegetables) relative to manufactured food. When the list of items was not provided, or manufactured and natural food were not analyzed separately, only the word "food" is used

Although semantic category-specific deficits often occur in the absence of damage to the structural description system (see Caramazza & Mahon, 2006, for a review), some patients may also exhibit category-specific deficits at the pre-semantic level of processing, in addition to category-specific semantic deficits. Building on the original formulation of the SFH, it has been proposed that, owing to visual similarity, perceptual crowding among structural descriptions might be responsible, through a cascade of processes, for naming deficits with living things (Humphreys & Forde, 2001; Humphreys, Riddoch, & Quinlan, 1988; but see Laws & Gale, 2002; Laws & Neve, 1999, for a different opinion).

The domain-specific hypothesis

The category-specific semantic deficits shown by brain-damaged patients published over the years did not always satisfy the SFH assumptions (for a review, see Capitani et al., 2003). First, the living/non-living distinction does not seem to be universal, with the ability to recognize living things breaking down along finer-grained lines. Indeed, patients with disproportionately or selectively impaired processing of animals (Blundo, Ricci, & Miller, 2006; Caramazza & Shelton, 1998), fruit/vegetables (Hart, Berndt, & Caramazza, 1985; Laiacona, Barbarotto, & Capitani, 2005; Samson & Pillon, 2003), conspecifics (Ellis, Young, & Critchley, 1989; Miceli et al., 2000), social groups (Rumiati et al., 2014), or non-living things (Laiacona & Capitani, 2001; Sacchett & Humphreys, 1992) have been described. Moreover, Laiacona, Capitani, and Caramazza (2003) published the case of an HSE patient, E.A., who exhibited poor recognition of living things and preserved recognition of sensory-quality categories (i.e., liquids, substances, and materials). This latter study in particular challenges the view that damage to the sensory-semantic subsystem necessarily causes recognition deficits affecting all living things and the sensory-quality categories (Borgo & Shallice, 2001; 2003).

To account for these and other inconsistencies in the literature, Caramazza, Mahon, and Shelton proposed the Domain-Specific knowledge Hypothesis (DSH) (Caramazza & Shelton, 1998; Caramazza & Mahon, 2003; 2006; Mahon & Caramazza, 2009; 2011). They argued that evolutionary pressure has imposed an organization of conceptual knowledge into object domains. These innate organizational constraints on the central nervous system allow more efficient recognition. Given the critical role played by evolution, it is argued that information in the brain is represented in categories that are important for the survival and fitness of individuals, such as animals, plants, conspecifics, and perhaps tools. In agreement with the DSH, Mahon and Caramazza (2009) formulated several predictions about the category-specific semantic impairments that may be observed after brain damage. First, these deficits should reflect the organization of conceptual knowledge into evolutionarily relevant categories; second, they should affect all kinds of knowledge concerning the damaged category; and third, since the categorical constraints are innately specified, category-specific semantic deficits should emerge from early damage to the brain. Independent research in different cognitive domains with infants supports the claim that there are innately specified constraints on the organization of conceptual knowledge about, for instance, objects, numbers, and conspecifics (see Carey, 2011, for an extensive overview). Based on a series of considerations, Mahon and Caramazza (2011) elaborated on the DSH and renamed it the Distributed Domain-Specific Hypothesis (DDSH). They reasoned that the pattern of neural responses in higher-order areas is driven not only by physical input but also by the way in which it is interpreted. Moreover, this interpretation is not expected to occur in a single region of the ventral object-processing stream but to depend on its connections with other regions in the brain. Mahon and Caramazza (2011) argued that the integration of information about object identity is mediated by innately determined patterns of connectivity that have been selected because they serve evolutionarily relevant domains of knowledge (or categories of objects). On this view, a domain-specific neural system is a network of brain regions in which each region processes a different type of information about the same domain. Thus, the organization by category in the ventral stream reflects the visual structure of the world but also the way in which the ventral stream is connected with the rest of the brain. According to the DDSH, the integration of visual information with information about taste or odor would be relevant for food recognition and less so for other categories of objects (tools or animals). The neural basis of this connectivity has been proposed to be the white matter, although the authors warned that this aspect of the theory requires further development (Mahon & Caramazza, 2011).

Now that we have briefly outlined the main tenets of the SFH and the DSH/DDSH, we turn our discussion to the specific case of the food category.

Food category as addressed in single case studies

Caramazza and Shelton (1998, p. 5) argued that apple, corn flakes, carrot, pizza, hamburger, and Sacher cake not only look different but also serve different functions, and that discriminating among them might be based not so much on their visual appearance as on their functions: for a snack, for breakfast, in a salad, for dinner, as fast food, for a party, and so on. One might also contend that recognition of food items cuts across the natural/manufactured distinction. It is therefore of theoretical interest to establish whether this distinction is relevant, and whether natural food behaves like living things and manufactured food like non-living things (see Capitani et al., 2003, p. 225, for a preliminary discussion of this issue). According to the SFH, concepts about natural food, such as an apple, would be expected to be best characterized by sensory information (e.g., taste, color, texture, consistency, etc.) rather than by functional information (e.g., the occasion on which a particular food is normally consumed, the procedures followed for its preparation, etc.). On the other hand, concepts about manufactured food, such as pasta, because it is man-made, could be best characterized by functional rather than sensory information. Thus, damage to the subsystem that represents sensory properties is expected to reduce patients' ability to recognize not only living things but also natural food, while sparing the ability to recognize non-living things and manufactured food. In contrast, damage to the subsystem that represents functional properties should reduce patients' ability to recognize non-living things as well as manufactured food, leaving the ability to recognize natural food and other living things unaffected.

There are alternative hypotheses and predictions. First, as sensory properties might be relevant for the recognition of food as a whole, damage to the sensory subsystem could give rise to a deficit affecting both natural and manufactured food. Second, the same prediction – a recognition deficit for both natural and manufactured food as a consequence of brain damage – can be generated from the DSH, but on different premises: the food category is a good candidate for being one of those that emerged through evolution, given its relevance for the survival of our species.

Preliminary evidence about the way in which the food category might be represented in the brain can be derived from already published single case studies. The majority of brain-damaged patients with category-specific deficits showed predominantly impaired knowledge about manufactured food and natural food (i.e., fruit/vegetables) as well as about some other living things, such as animals, flowers, or plants, while knowledge about non-living things (e.g., tools, furniture, means of transportation, etc.) was unaffected or the least affected (see Table 1, Section a). This is the case for the four patients (I.N.G., J.B.R., K.B., and S.B.Y.) first reported by Warrington and Shallice (1984), and later also tested by other groups. Neuropsychologists subsequently reported patients with a similar pattern: L.A. (Gainotti & Silveri, 1996; Silveri & Gainotti, 1988), S.B. (Sheridan & Humphreys, 1993), and Felicia (De Renzi & Lucchelli, 1994). The remaining two patients included in Table 1, Section a are F.B. (Sirigu, Duhamel, & Poncet, 1991), who revealed greater naming difficulties for food and animals than for tools, as well as bizarre food-related behaviors such as eating raw potatoes and frozen food, and M.U. (Borgo & Shallice, 2001; 2003), who showed poor performance on several lexical-semantic tasks with natural food, manufactured food, liquids, uncountable substances, and animals, whereas performance with tools was relatively normal. In all these patients the deficit affected not only the recognition of food but also that of other living things (e.g., animals and flowers), sparing non-living things. Taken together, these findings would imply that manufactured food has much in common with fruit and vegetables (see also Capitani et al., 2003, p. 225).

In contrast to the above patients, K.E. (Hillis et al., 1990) and V.E.R. (Warrington & McCarthy, 1983) recognized natural and manufactured food normally (see Table 1, Section b). Specifically, K.E. performed better with food as a whole than with non-living things on six different tasks tapping lexical-semantic processes, while V.E.R. performed normally on the auditory-visual comprehension task with food, flowers, and animals but not with objects.

There are, however, two further patients who double-dissociate in recognizing animals, on the one hand, and both natural and manufactured food, or only natural food, on the other (see Table 1, Section c). The first patient, J.J., performed poorly on naming and comprehension tasks with both natural and manufactured food relative to animals (and vehicles) (Hillis & Caramazza, 1991). The second patient, E.W. (Caramazza & Shelton, 1998), presented a disproportionate recognition deficit for animals with spared recognition of natural food and non-living things. In Table 1, Section d we report on patients M.D. (Hart, Berndt, & Caramazza, 1985) and P.S. (Hillis & Caramazza, 1991), whose naming was better with manufactured food and non-living things than with fruit and vegetables (animal recognition was spared in M.D. and impaired in P.S.).

The available neuropsychological literature thus shows mixed patterns. Table 1, Section a seems to indicate that food is represented as a single category, irrespective of whether it is natural or manufactured, given that patients recognized, or failed to recognize, food as a whole. Moreover, the fact that all nine patients with impaired food recognition also showed a deficit in recognizing animals, insects, flowers, and drinks, while recognition of non-living things was unaffected, would suggest damage to a putative subsystem supporting recognition of living things, all foods included.

Three cases reported in Table 1, Sections c and d, however, challenge this interpretation: recognition of either natural food alone, or of both natural and manufactured food, double-dissociates from recognition of animals, suggesting that living categories break down along finer-grained lines. To complicate matters further, in M.D. and P.S. impaired recognition of natural food dissociated from normal recognition of manufactured food (see Table 1, Section d). The opposite pattern, namely impaired recognition of manufactured food with spared recognition of natural food, has not been documented to date.

The cases summarized in Table 1 do not yet allow us to draw firm conclusions. Moreover, the number of food stimuli employed in the reviewed studies was in many instances too small, the stimuli belonging to the different categories were not always matched for relevant variables, and different patients were tested with different stimuli. The nature of food representation clearly requires further investigation using stimuli from different theoretically pertinent categories, matched for relevant concomitant variables as well as for other more specific food properties, such as calorie content and level of transformation (see the last section of the present paper for a discussion dedicated to these variables).

Recognition by modality

A very popular view of how our knowledge is organized in the brain is conveyed by the embodied cognition hypothesis, whose central tenet is that object concepts are grounded in perception and action systems (Barsalou, 2008; Martin, 2009). Early evidence in favor of this view comes primarily from neuroimaging studies, especially in domains such as color (Chao & Martin, 1999; Martin et al., 1995) and action (e.g., Hauk, Johnsrude, & Pulvermüller, 2004; Tettamanti et al., 2005). In the color domain, for instance, generating color associates in response to achromatic pictures or to their written names was found to activate regions in the ventral temporal cortex close to other regions typically responding to low-level visual motion processing (Martin et al., 1995). In the action domain, silently reading or listening to verbs denoting actions (e.g., to kick), compared with psychological verbs (e.g., to wonder), implicated fronto-parietal regions that are normally activated when the corresponding actions are actually performed (Hauk et al., 2004; Tettamanti et al., 2005). In line with embodied approaches, these phenomena are taken as evidence that the format of the corresponding concepts is modality-specific. In contrast, disembodied critics prefer to interpret the sensorimotor activations as the consequence of spreading activation between conceptual (i.e., amodal) representations and the sensorimotor system (see Mahon, 2015).

Food has not escaped this debate. In an fMRI study with healthy individuals, for instance, Simmons, Martin, and Barsalou (2005) showed that the visual presentation of food activates, in addition to visual areas, multiple sensory areas. Specifically, viewing appetizing photographs of food (relative to buildings), in addition to activating the regions in the visual cortex implicated in object shape, also activated two regions – the right insula/operculum and the left orbitofrontal cortex (OFC) – that are close to the gustatory cortex. In particular, the right insula/operculum has been found to be implicated when people actually taste different substances compared with neutral substances (Kringelbach, de Araujo, & Rolls, 2004), while the OFC has been found to be associated with taste reward values (e.g., O'Doherty et al., 2001). According to Simmons et al. (2005), their findings indicate that the gustatory system responds not just to actual food but also to pictures of food, even when participants process them superficially. This is because the brain areas representing knowledge for a given category are the same as those typically used to process its physical instances, thus grounding conceptual knowledge in modality-specific brain areas.

What should we expect to observe in the event of brain damage? Unlike the disembodied approach, embodied views seem hard-pressed to account for patients' dissociations between different sensory modalities (vision, taste, or olfaction) in food recognition.

The study by Luzzi et al. (2007) allows us to test these contrasting hypotheses indirectly. These authors had four groups of patients with different forms of neurodegenerative disease perceive and recognize odors and pictures. Patients with semantic dementia (SD, n = 8), Alzheimer's disease (AD, n = 14), fronto-temporal dementia (FTD, n = 11), and corticobasal degeneration (CBD, n = 7) completed 'The Odor Perception and Semantics Battery', which comprises the following tasks: odor discrimination (assessed by asking whether two odors were the same or different), odor naming, odor-picture matching, picture naming, and word-picture matching.

Of particular interest for the argument we are developing here are the results shown by the AD patients (Luzzi et al., 2007). AD patients revealed impaired odor discrimination, a deficit suggested to be an early feature of this disease (see Liberini & Parola, 2001; Martzke et al., 1997; Mesholam et al., 1998, for reviews), and they also obtained poor scores on the odor-naming and odor-picture-matching tasks. In contrast, AD patients performed normally on the confrontation-naming and word-picture-matching tasks with the same stimuli used in the odor perception/comprehension tasks. In summary, the performance of the AD patients described by Luzzi et al. (2007) suggests that they did not suffer from a generalized semantic disorder. More importantly, the AD patients' behavioral pattern does not support the notion, consistent with the grounded hypothesis, that preserved knowledge about edible items in one sensory modality (e.g., odor or taste) is necessary in order to recognize them correctly when they are presented in a different modality (e.g., visually); this pattern of results is instead best accommodated within the disembodied approach.

Future investigation is warranted to establish whether brain damage may give rise to the reverse dissociation, namely visual agnosia with preserved odor recognition. Moreover, we suggest that item analyses of the stimuli employed in different tasks (e.g., naming and comprehension) and across modalities (e.g., vision and olfaction) should provide useful information about the locus of the deficit. For instance, if patients fail to name and comprehend some particular foods across tasks and modalities, their deficit is more likely to be due to degraded semantic representations of those food concepts.
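As a concrete illustration of the item analysis we are proposing, the following sketch (with hypothetical item names and scores; the column layout is an assumption, not the format of any published dataset) separates items that fail consistently across modalities, pointing to a semantic locus, from items that fail in only one modality.

```python
# Hypothetical sketch of a cross-modal item analysis for one patient.
import pandas as pd

# 1 = correct, 0 = error; items and scores are invented for illustration.
scores = pd.DataFrame({
    "item":          ["lemon", "coffee", "garlic", "bread", "basil"],
    "visual_naming": [0, 1, 0, 1, 1],
    "odor_naming":   [0, 1, 0, 0, 1],
})

# Items failed in both modalities: consistent with a degraded semantic
# representation of those food concepts.
semantic_candidates = scores[(scores.visual_naming == 0) & (scores.odor_naming == 0)]

# Items failed in only one modality: consistent with a modality-specific locus.
modality_bound = scores[scores.visual_naming != scores.odor_naming]

print("Cross-modal failures:", semantic_candidates["item"].tolist())
print("Single-modality failures:", modality_bound["item"].tolist())
```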

Deficits concerning eating behaviors

As mentioned earlier, the study by Luzzi et al. (2007) also included SD and FTD patients, who also tend to experience feeding abnormalities, although what induces these abnormalities might differ between the two groups. On 'The Odor Perception and Semantics Battery', SD patients showed a striking dissociation between normal odor perception and impaired recognition of odors. Similar patterns have also been observed in other studies involving SD patients (patients 1–3 in Piwnica-Worms et al., 2010; patient 2 in Rami et al., 2007). As the SD patients were also impaired on picture-naming and word-to-picture-matching tasks, their pattern has been interpreted as reflecting a widespread semantic deficit (Luzzi et al., 2007). This loss of semantic knowledge could be held responsible for the dietary changes in the six SD patients who showed food fads and in the patient who showed a tendency to mouth inanimate objects.

On the same battery, the FTD patients' odor recognition was worse than that of controls but better than that of the SD patients, while their visual recognition was only mildly impaired (Luzzi et al., 2007). Nine FTD patients also had altered eating habits, with eight showing overeating and one having food fads. Thus, FTD patients' eating disorders might have more to do with deficient control and executive functions than with a loss of semantic knowledge. This interpretation is supported by findings from two other studies in which, using voxel-based morphometry, the authors established a link in FTD patients between eating abnormalities (binge eating, a pathological sweet tooth) and greater atrophy of the orbitofrontal-insular-striatal circuit (Whitwell et al., 2007; Woolley et al., 2007).

Food-relevant properties

The investigation of food recognition requires a better understanding of the properties that are central to food concepts. Two of these properties are the transformation of food and its calorie content, which we discuss here in turn. The division of food into natural and manufactured categories is biologically sound: natural and manufactured foods supply different energy values, with manufactured food providing more energy. In "Catching Fire," Richard Wrangham (2009) hypothesized that the evolutionary jump that took us from Australopithecus to Homo erectus, around 1.8 million years ago, occurred when our ancestors began to use fire for cooking food. This important leap forward was possible because cooking improved our ancestors' diet by increasing the energy gained from food and, in turn, brain volume and the potential for developing its abilities (Wrangham, 2009). Wobber, Hare, and Wrangham (2008) tested the hypothesis that hominids preferred cooked food by having several populations of captive great apes try various raw and cooked food items. As the results showed that the apes strongly preferred cooked food, the authors concluded that hominids would also spontaneously prefer cooked food to raw. Moreover, Carmody, Weintraub, and Wrangham (2011) investigated the effects of unprocessed, pounded, and/or cooked diets on body mass and food preference in mice (Mus musculus), and found that increases in body mass were attributable to cooked starch and meat and not to food intake or activity levels. The increase in energy conveyed to the animals by cooked food was greater than that provided by pounding meat or starch-rich tubers. These results were replicated when food preferences were analyzed in fasted mice (Carmody et al., 2011). Thermal and non-thermal processing techniques are also practiced ubiquitously by humans because, in addition to increasing energy gain, processing food increases its palatability and edibility and considerably reduces the chances of infection (see also Carmody & Wrangham, 2009). Furthermore, distinguishing raw from cooked food helps avoid ingesting foods that are poisonous or toxic when eaten raw.

Even though food concepts may normally engage different sensory modalities (e.g., vision, smell, taste, etc.), vision alone carries a great deal of information about food that can inform and guide our feeding decisions. This is apparent in everyday life, where most of our food-related decisions are strongly based on visual cues, and it was confirmed in a study in which, using electrical neuroimaging of visual evoked potentials (VEPs), Toepel et al. (2009) demonstrated that the human brain differentiates high-energy (high-fat) from low-energy (low-fat) food at ~165 ms post-stimulus onset, and again at ~300 ms post-stimulus onset. In the first processing stage (~165 ms), response differences were distributed across a wide brain network that included posterior occipital regions and temporo-parietal cortices normally implicated in object recognition, as well as inferior frontal cortices typically associated with decision making. In the subsequent processing stage (~300 ms), responses differed in both topography and strength, mainly within prefrontal cortical regions implicated in reward assessment and inferior frontal cortices involved in decision making.
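To illustrate the logic of such a time-window contrast, here is a minimal sketch on simulated single-electrode epochs; the sampling rate, window bounds, and trial counts are assumptions for illustration, not the parameters of Toepel et al. (2009).

```python
# Minimal sketch of a VEP time-window contrast on simulated data.
import numpy as np

fs = 1000                              # sampling rate in Hz (assumed)
times = np.arange(-100, 500) / fs      # -100 to +499 ms around stimulus onset
rng = np.random.default_rng(0)

# Simulated epochs for one electrode: shape = (n_trials, n_timepoints)
high_energy = rng.normal(0.0, 1.0, (60, times.size))
low_energy  = rng.normal(0.0, 1.0, (60, times.size))

# Mean amplitude in a window around ~165 ms for each condition
window = (times >= 0.150) & (times <= 0.180)
diff = high_energy[:, window].mean() - low_energy[:, window].mean()
print(f"High - low energy amplitude difference (150-180 ms): {diff:.3f} a.u.")
```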

Interestingly, these effects occurred independently of the task performed by the participants, suggesting that a food's energetic content is a reward property that is processed rapidly and automatically. Similarly, in an fMRI study, a cluster of activation within the medial and dorsolateral prefrontal cortex, as well as in the diencephalon, was observed in response to high-calorie relative to low-calorie foods (Killgore et al., 2003). High-calorie foods thus yield activation in regions that are important for evaluating the biological relevance of a stimulus and the anticipation of a reward.

Getting started with the stimuli

The importance of the level of transformation of food and of its calorie content, as well as of other properties, has been duly acknowledged in a study in which 86 healthy participants rated 877 pictures, of which one-third depicted food (natural and manufactured), while the remaining pictures depicted objects (kitchen utensils, clothes, tools, and scenes) and natural non-edible things (rotten food, natural non-edible items, and animals) (Foroni et al., 2013). Variables that specifically refer to food items were thus included, such as perceived calorie content, perceived level of transformation, and perceived distance from edibility (i.e., the work required to bring a given food to an edible form: raw fish generally requires more work than fruit). The study also assessed each item's perceived valence, typicality, familiarity, ambiguity, and arousal – variables also measured in other studies (e.g., Toepel et al., 2009; Nummenmaa et al., 2012). In addition, the authors controlled for the size, brightness, and high-frequency power of the stimuli, and collected linguistic variables such as frequency, naming, and length (Foroni et al., 2013).

The inter-correlational analyses performed on the ratings clearly indicate that, compared with non-edible natural stimuli and objects, food stimuli stand out as different (see Tables 2, 3 and 4). In general, based on the variables rated for all item categories, food (Table 2), objects (Table 3), and natural items (Table 4) showed similar correlation patterns, with only some differences in magnitude. However, they also showed some interesting differences when the correlations involved arousal. First, unlike for objects and non-edible natural items, the level of arousal induced by food items (Table 2) did not correlate with brightness, familiarity, typicality, or ambiguity ratings, nor did brightness correlate with valence. In contrast, both objects (Table 3) and natural items (Table 4) were rated as more arousing when less ambiguous (r = −0.20 and −0.39, respectively), objects were rated as more arousing when less familiar (r = −0.10), and natural items were considered more arousing when more typical (r = 0.43). Second, and specifically for the food stimuli (Table 2), correlations were also calculated on the ratings of perceived calorie content, perceived distance from edibility, and level of transformation. Arousal ratings were found to correlate positively with perceived calorie content and level of transformation (both r = 0.66), and negatively with distance from edibility (r = −0.43). These findings indicate, not surprisingly, that food stimuli are rated as more arousing when they are rated as containing more calories, as being more manufactured, and as requiring less work before being consumed. The level of arousal thus seems to be associated with the desire to consume a food item immediately. Likewise, the negative correlation between valence and distance from edibility (r = −0.36) suggests that participants rated more positively foods that require less work in order to be eaten. Perceived calorie content also correlated with level of transformation (r = 0.88) and distance from edibility (r = −0.38), and the latter two also correlated with each other (r = −0.43). Finally, perceived calorie content correlated positively with actual calorie content (r = 0.73).

Table 2 Correlation between the validation dimensions for food (natural food and manufactured food, N = 252). From Foroni et al. (2013)
Table 3 Correlation between the validation dimensions for objects (artificial food-related objects and artificial objects, N = 418). From Foroni et al. (2013)
Table 4 Correlation between the validation dimensions for natural objects (rotten-food, natural-non-food item, animals, and scenes, N = 207). From Foroni et al. (2013)
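For readers who wish to reproduce analyses of this kind on the database, a minimal sketch follows; the file and column names are assumptions, and the actual variable labels should be checked against the downloaded norms.

```python
# Hedged sketch: inter-correlations among rating dimensions of food stimuli.
import pandas as pd

# Hypothetical file/column names; adapt to the database available at
# http://foodcast.sissa.it/neuroscience/
ratings = pd.read_csv("foodcast_food_ratings.csv")

dimensions = ["arousal", "valence", "perceived_calories",
              "level_of_transformation", "distance_from_edibility"]

# Pearson correlation matrix across the food items
corr = ratings[dimensions].corr(method="pearson")
print(corr.round(2))
```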

This database (available at http://foodcast.sissa.it/neuroscience/) has already provided the stimuli for several studies, one of which is relevant to the scope of the present review. Rumiati, Foroni, and colleagues (Rumiati et al., under review) had 14 patients with AD, nine patients with primary progressive aphasia (PPA), and 30 healthy controls perform a confrontation-naming task, a categorization task, and a comprehension task with edible items (natural and manufactured food) and non-edible items (tools and non-edible natural things). Overall, controls were more accurate than patients, and PPA patients were generally more impaired than AD patients, especially in the naming task. More specifically, compared with controls, patients were better at naming edible than non-edible items, while, relative to controls, they did not show an advantage for manufactured over natural food. Interestingly, food calorie content was found to be the best predictor of controls' naming performance and to correlate negatively with the age of acquisition of food names. One possible interpretation of the naming findings is that brain damage impinged on the ability of both patient groups to assess calorie content. In a fourth task, the same pictures of natural and manufactured food were presented together with a description of the food's sensory or functional properties that could be either congruent or incongruent with that particular food. Results showed that performance on sensory trials was always more accurate than on functional trials, irrespective of whether the food was natural or manufactured.
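A relationship of this kind could be probed, for instance, with an item-level logistic regression; the sketch below is our own illustration (with hypothetical file and column names), not the analysis pipeline of Rumiati et al.

```python
# Illustrative sketch: does perceived calorie content predict naming accuracy?
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: one row per item per participant, with
# columns 'correct' (0/1), 'perceived_calories', and 'age_of_acquisition'.
trials = pd.read_csv("naming_trials.csv")

# Simple logistic regression (a full analysis would also model the
# clustering of trials within participants, e.g., with mixed effects).
model = smf.logit("correct ~ perceived_calories + age_of_acquisition",
                  data=trials).fit()
print(model.summary())
```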

Conclusions

Food is essential for our survival, and it has acquired additional relational and cultural meanings. Nevertheless, our understanding of the way knowledge about food is organized in the human brain is still rather limited. We set out to review the available neuropsychological studies with the aim of analyzing whether and how the food category breaks down. We made the case, as others did before us (see Capitani et al., 2003; Caramazza & Shelton, 1998), that foodstuffs are theoretically interesting for testing existing theories of semantic knowledge, because they cut across living and non-living things or domains. First, we observed that, in most cases, category-specific deficits affect recognition of natural food, manufactured food, and living things, sparing recognition of non-living things. However, there are also a couple of patients whose ability to recognize natural and transformed food dissociates (see Table 1), and others in whom recognition of animals and recognition of food dissociate. Second, we reported preliminary findings suggesting that lexical-semantic processing of food stimuli is influenced also by food-intrinsic properties. Third, based on existing neuropsychological evidence (Luzzi et al., 2007), the integrity of the representation of foods in one sensory modality does not seem to be necessary for recognizing them in a different sensory modality, in contrast with what might be expected based on imaging findings (Simmons et al., 2005).

We would like to conclude by acknowledging that more research is necessary to better explain the role played in recognition by variables associated with food, such as its level of transformation and calorie content, as well as by characteristics of the perceiver, such as BMI.