Cognition

Volume 98, Issue 3, January 2006, Pages 223-243

Playing on the typewriter, typing on the piano: manipulation knowledge of objects

https://doi.org/10.1016/j.cognition.2004.11.010

Abstract

Two experiments investigated sensory/motor-based functional knowledge of man-made objects: manipulation features associated with the actual usage of objects. In Experiment 1, a series of prime-target pairs was presented auditorily, and participants were asked to make a lexical decision on the target word. Participants made a significantly faster decision about the target word (e.g. ‘typewriter’) following a related prime that shared manipulation features with the target (e.g. ‘piano’) than an unrelated prime (e.g. ‘blanket’). In Experiment 2, participants' eye movements were monitored when they viewed a visual display on a computer screen while listening to a concurrent auditory input. Participants were instructed to simply identify the auditory input and touch the corresponding object on the computer display. Participants fixated an object picture (e.g. “typewriter”) related to a target word (e.g. ‘piano’) significantly more often than an unrelated object picture (e.g. “bucket”) as well as a visually matched control (e.g. “couch”). Results of the two experiments suggest that manipulation knowledge of words is retrieved without conscious effort and that manipulation knowledge constitutes a part of the lexical-semantic representation of objects.

Introduction

A key is a small implement that is made of metal and cut into a special shape. A key is also associated with a lock or door. The function of a key is to fasten or unfasten a lock by turning its bolt. To this end, certain actions by hand and wrist movements are used. Even for such a simple object as a key we represent various kinds of knowledge: we not only know how the object looks or feels, but also what it is used for and how we use it. A critical question concerns what the underlying representation and structure of such knowledge is like.

Since the groundbreaking studies by Warrington (1975), many researchers have reported case studies of category-specific impairments as a source of insight into such knowledge representations. On tasks such as picture naming and word definition, patients show a disproportionate impairment with stimuli denoting living things (e.g. animals and fruits/vegetables) relative to non-living things (e.g. tools and utensils) or vice versa. The patterns of category-specific impairments seemed to indicate that lexical-semantic representations are not randomly organized, but have a certain underlying structure. Category-specific impairments have, hence, been considered a window into the structure of lexical-semantic representation.

A dominant view on category-specific impairments in the literature is the sensory-functional hypothesis, initiated by Warrington and Shallice (Shallice, 1988, Warrington and McCarthy, 1983, Warrington and Shallice, 1984) and further developed by other researchers (Farah and McClelland, 1991, Saffran and Sholl, 1999; see Forde & Humphreys, 1999, for review). The sensory-functional hypothesis claims that category-specific impairments arise because of a differential weighting of sensory and functional information in categories. In this view, sensory information, visual in particular, is important in differentiating between living things, while functional information is crucial in differentiating between non-living things. Thus, the distinction between knowledge of an object's perceptual characteristics and its function has played an important role in research on object representation in general.1

Although functional information is a crucial concept in explaining category-specific impairments, there is surprisingly little consensus about its definition. The operational definition of functional information has varied across studies, ranging from information about an object's usage to non-sensory information including encyclopedic information (e.g. Cree & McRae, 2003, p. 181).2

Postulating that the function of objects is a primary basis for categorization, Nelson (1973) noted, “Functional definitions will be found to vary in their complexity and abstraction from the earliest simple definitions in terms of action to definitions in terms of higher order properties of the most abstract type such as hormones or S–R connections” (p. 37). On this view, the “earliest simple” functional concept is derived from our intuitive interaction with an object. For instance, when we see a water gun, we think of pushing its trigger to squirt a stream of water. The action of pushing the trigger of a water gun is the “earliest simple” functional concept about a water gun. Yet, the physical relations underlying how increased pressure in the water gun produces a curvilinear stream of water, or the water's chemical impact on a person's skin and his/her response to it, belong to more abstract functional knowledge. Thus, functional knowledge comprises multiple characteristics, ranging from heavily perceptually based to highly abstract. Functional information has generally been considered knowledge about the intended usage or purpose of an object, namely what an object is used for (“what for” knowledge). As such, functional knowledge has been regarded as conceptual in nature. However, we also have knowledge about how to use an object, or more exactly, how to manipulate an object to successfully carry out its intended usage (“how” knowledge). This type of functional knowledge is presumably grounded in sensory/motor experiences.

Supporting Nelson's view that even functional knowledge has a perceptual basis, Martin, Ungerleider, and Haxby (2000) define functional knowledge as the “information about patterns of visual motion and patterns of motor movements associated with the actual use of the object. As such, this information is as dependent on sensory experience as is information about the visual form. The difference is that functional information is derived from motor movements, and visual processing of motion, rather than visual processing of form” (p. 1028). In this sense, functional information is not necessarily characterized as more abstract, conceptual or verbal than perceptual information.

As evidence for this view of functional information, Martin and colleagues showed in a series of neuroimaging studies that there is category-specific activation in the ventral premotor cortex (VPMCx) and the posterior middle temporal gyrus (PMTG) for the retrieval or recognition of manipulable artifacts such as tools and utensils (e.g. Chao and Martin, 2001, Martin et al., 1996; see Martin & Chao, 2001, for review; also see Grabowski, Damasio, & Damasio, 1998). The middle temporal gyrus is well known to be sensitive to visual motion information and the premotor area is involved in processing motor movements. Thus, although not conclusive, the activation pattern in these areas is consistent with the hypothesis that information about patterns of visual motion and motor movements may be relevant for the retrieval or recognition of man-made objects. Patterns of visual motion and motor movements reflect the manner of interaction with objects, namely how we use an object for its intended usage. Hence, in addition to “classically functional” information (“what for”), manipulation information (“how”) appears to be an important part of the lexical-semantic representation of man-made objects, and this sensory/motor-based functional knowledge seems to be represented in the VPMCx and the PMTG.

More directly addressing the multi-dimensionality of functional knowledge, some neuropsychological case studies using picture identification, definition or semantic judgment tasks have shown dissociations in which functional knowledge could be differentiated into usage-based “what for” and manipulation-based “how” (e.g. Buxbaum et al., 2000, Sirigu et al., 1991). Reporting on two apraxic patients, JD and WC, Buxbaum et al. (2000) investigated their patients' knowledge about the function and manipulation of manipulable objects. Apraxia has been regarded as a window into manipulation-based “how” knowledge since apraxic patients are impaired at performing and comprehending hand/body movements despite the absence of any muscular problems, sometimes even in the presence of intact “what for” knowledge about an object (Buxbaum, Schwartz, & Carew, 1997). JD and WC performed the picture version of the Function and Manipulation Triplets Test (Buxbaum & Saffran, 1998), in which they were asked to view three pictured objects and select the two that were most similar to one another with respect to three conditions: function (e.g. “record player”, “radio”, “telephone”), manipulation (e.g. “typewriter”, “piano”, “stove”) and function and manipulation together (e.g. “roller”, “paintbrush”, “screwdriver”).3 Both patients' performance was significantly worse in the manipulation condition (JD: 3/14, 21% correct; WC: 7/14, 50%) compared to the function condition (JD: 16/20, 80%; WC: 20/20, 100%). Their performance on the function and manipulation condition was intermediate (JD: 14/20, 70%; WC: 12/20, 60%). This strong relationship between apraxia and manipulation knowledge deficits suggests that sensory/motor representations are involved not only in comprehending and producing voluntary movements, but also in thinking about them.

Nevertheless, it is worth noting that Buxbaum and colleagues used an explicit task. In other words, participants were explicitly asked to retrieve appropriate types of knowledge. It is possible that the dissociations they noted in their patients reflected a failure to use appropriate heuristic information or strategies to group the stimuli. Thus, their study does not address whether manipulation knowledge would be accessed in an implicit task, and additionally, whether this knowledge is automatically accessed without conscious effort, reflecting its status as an intrinsic part of the lexical-semantic representation of objects.

In their neuroimaging studies, Martin and colleagues used various types of tasks including passively viewing pictures, silently naming pictures and reading written names, and found a consistent activation pattern. These results show a stable involvement of certain neural areas in manipulation knowledge and are suggestive of automatic access to manipulation knowledge at the neural level. Although very suggestive, neural activations in the sensory/motor areas during object identification are not direct evidence that manipulation knowledge is activated during object identification. Therefore, it is necessary to examine activation of manipulation information in a more direct manner: If clear effects of manipulation knowledge on cognitive processing can be shown in an implicit behavioral task in which manipulation knowledge is not explicitly called for, then it will not only provide evidence for manipulation knowledge as an intrinsic part of object representation, but will also buttress the neuroimaging results by Martin and colleagues. Yet, to date there is little behavioral evidence for the automatic activation of manipulation information in an implicit task.

Thus, the present study aimed to investigate the retrieval of manipulation knowledge in implicit behavioral tasks in normal subjects, using two different experimental methods in different modalities. The first experiment used a lexical decision task in the auditory modality and the second experiment tracked participants' eye movements to a picture display while they mapped speech input to a picture in the display.

A lexical decision task has been commonly used in studies on lexical-semantic processing (e.g. Meyer and Schvaneveldt, 1971, Milberg and Blumstein, 1981, Swinney et al., 1979) and provides a measure of priming, namely an effect of processing facilitation based on a relationship between the prime and target words (e.g. a faster reaction time to the target ‘dog’ following a semantically related prime such as ‘cat’ compared to a semantically unrelated prime such as ‘cup’). Priming is a robust finding demonstrated for words and pictures (e.g. Carr et al., 1982, Moss et al., 1995, Vanderwart, 1984) and has been interpreted as reflecting the organization or operation of processes within the lexical-semantic network. Therefore, the priming paradigm has been a useful tool for research on the lexical-semantic interconnections among the units of the network.
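The priming measure described above reduces to a difference between mean reaction times in the unrelated and related conditions. A minimal sketch of that computation, with purely illustrative RT values (not data from this study):

```python
# Hypothetical reaction times in ms; the values are invented for illustration.
related_rts = [812, 790, 845, 801]    # e.g. 'dog' following the related prime 'cat'
unrelated_rts = [876, 853, 890, 861]  # e.g. 'dog' following the unrelated prime 'cup'

def mean(xs):
    return sum(xs) / len(xs)

# Facilitation (priming effect): mean unrelated RT minus mean related RT.
# A positive value indicates faster responses after related primes.
priming_effect = mean(unrelated_rts) - mean(related_rts)
print(f"priming effect: {priming_effect:.1f} ms")
```

In the actual experiments such condition means would of course be computed per participant and per item and submitted to significance testing, but the facilitation measure itself is just this subtraction.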

A priming paradigm was used in Experiment 1 to determine whether significant priming would obtain in an implicit task where the relationship between the prime-target pairs was based on shared manipulation features between objects that are otherwise semantically dissimilar (e.g. ‘piano’–‘typewriter,’ ‘key’–‘screwdriver’). Usually, prime and target words used in a priming experiment are semantically related or semantically associated with each other. Thus, it is useful to determine whether common manipulation features would lead to a priming effect when the prime-target pairs are not otherwise semantically or associatively related.

Experiment 2 used an eye tracking paradigm. Eye movement techniques have been increasingly used to study lexical processing due to several advantages (e.g. Allopenna et al., 1998, Dahan et al., 2001, Tanenhaus et al., 1995, Yee & Sedivy, in preparation). First, eye movements can be measured without disrupting speech or requiring participants to make a metalinguistic judgment. Second, the typical task requirements for the participant are to either look at or point to an object in the display. Thus, participants can engage in a naturalistic task. Third, eye movement techniques provide fine-grained and continuous temporal information, allowing for monitoring the temporal course of lexical-semantic processing.
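The dependent measure in this kind of paradigm is typically the proportion of fixations to each display object. A minimal sketch, where the object roles follow the paradigm but the counts are invented for illustration:

```python
# Hypothetical fixation counts per display object, summed over trials.
# The roles (target, manipulation-related, unrelated, visual control) follow
# the paradigm; the numbers themselves are invented for illustration.
fixation_counts = {
    "target": 120,               # e.g. 'piano'
    "manipulation_related": 45,  # e.g. 'typewriter'
    "unrelated": 20,             # e.g. 'bucket'
    "visual_control": 22,        # e.g. 'couch'
}

total = sum(fixation_counts.values())
proportions = {obj: n / total for obj, n in fixation_counts.items()}

# Sensitivity to shared manipulation features would appear as a higher
# fixation proportion for the manipulation-related object than for both
# the unrelated object and the visually matched control.
```

In practice such proportions are computed within successive time windows locked to the onset of the spoken word, which is what yields the fine-grained temporal information mentioned above.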

For the present study, we hypothesize that participants will show an effect of processing facilitation even in implicit tasks based on the common manipulation features between objects. Both Experiments 1 and 2 investigate manipulation knowledge indirectly without explicitly asking participants to access this manipulation knowledge. In other words, manipulation knowledge about objects is not task-relevant. Furthermore, the tasks are directed at the lexical level (lexical decision and speech-to-picture mapping), which functions as a “mediator” to the semantic representation of objects. Thus, a consistent pattern of results across these two different implicit tasks will provide strong behavioral evidence for the importance of manipulation knowledge in the lexical-semantic representation of objects.

Section snippets

Experiment 1

Experiment 1 investigated whether response time (RT) in an auditory lexical decision task reflects the activation of common manipulation features. If manipulation features are shared by object concepts due to their similar manner of manipulation (e.g. piano and typewriter), a priming effect would be expected for word pairs that denote those objects. Thus, the relatedness between a prime and a target in Experiment 1 is based on the common manipulation features among objects that are otherwise not semantically or associatively related.

Experiment 2

Experiment 2 aimed to replicate the findings of Experiment 1 using a different experimental method (eye tracking) and a different paradigm (speech-to-picture mapping). The goal of Experiment 2 was to determine whether eye movements would show sensitivity to an object related to a given spoken target in terms of manipulation features, and if so, when this happens.

Eye movement techniques have been increasingly used in research on lexical processing since they continuously monitor on-going lexical processing.

General discussion

Experiments 1 and 2 used different stimulus modalities as well as different experimental paradigms. Despite these differences, both Experiments 1 and 2 showed evidence of activation of shared manipulation features across otherwise semantically dissimilar stimuli. In Experiment 1, this was observed as a facilitatory priming effect, and in Experiment 2, as a heightened tendency to fixate objects with shared manipulation features. This pattern of results suggests that sensory/motor-based manipulation knowledge constitutes a part of the lexical-semantic representation of objects.

Acknowledgements

This research was funded by NIH Grant DC00314 to Sheila E. Blumstein and NIH Grant R01 MH62566-01 to Julie C. Sedivy. We would like to give special thanks to Eiling Yee for her tremendous support throughout these projects, and also to Jason Taylor and William Heindel for their valuable feedback. We also would like to thank the anonymous reviewers for their helpful comments. Address reprint requests to Jong-yoon Myung, Department of Cognitive and Linguistic Sciences, Box 1978, Brown University,

References (45)

  • T.C. Bates et al. (2003). PsyScript: A Macintosh application for scripting experiments. Behavior Research Methods, Instruments, and Computers.
  • L.J. Buxbaum et al. (1998). Knowing “how” vs. “what for”: A new dissociation. Brain and Language.
  • L.J. Buxbaum et al. (1997). The role of semantic memory in object use. Cognitive Neuropsychology.
  • L.J. Buxbaum et al. (2000). Function and manipulation tool knowledge in apraxia: knowing “what for” but not “how”. Neurocase.
  • T.H. Carr et al. (1982). Words, pictures, and priming: On semantic activation, conscious identification, and the automaticity of information processing. Journal of Experimental Psychology: Human Perception and Performance.
  • L.L. Chao et al. (2001). Representation of manipulable man-made objects in the dorsal stream. Neuroimage.
  • G.S. Cree et al. (2003). Analyzing the factors underlying the structure and computation of the meaning of Chipmunk, Cherry, Chisel, Cheese, and Cello (and many other such concrete nouns). Journal of Experimental Psychology: General.
  • M.J. Farah et al. (1991). A computational model of semantic memory impairment: modality specificity and emergent category specificity. Journal of Experimental Psychology: General.
  • E.M.E. Forde et al. (1999). Category-specific recognition impairments: a review of important case studies and influential theories. Aphasiology.
  • P. Garrard et al. (2001). Prototypicality, distinctiveness, and intercorrelation: Analyses of the semantic attributes of living and nonliving concepts. Cognitive Neuropsychology.
  • A.M. Glenberg et al. (2002). Grounding language in action. Psychonomic Bulletin and Review.
  • D. Graff (1995). The North American new text corpus (CD-ROM).