
Acta Psychologica

Volume 118, Issues 1–2, January–February 2005, Pages 171-191

Selection-for-action in visual search

https://doi.org/10.1016/j.actpsy.2004.10.010

Abstract

Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limitations and targets of this bias. In Experiment 1 we asked whether the effect demands processing capacity, and therefore manipulated the set size of the display. The results indicated a clear capacity requirement: the magnitude of the effect decreased at the larger set size. Consequently, in Experiment 2 we investigated whether the enhancement occurs only at the level of the behaviorally relevant feature or at a level common to different features, by manipulating the discriminability of the behaviorally neutral feature (color). Again, this manipulation influenced the action enhancement of the behaviorally relevant feature. In particular, the effect of the color manipulation on the action enhancement suggests that the action intention biases the competition between different visual features rather than enhancing the processing of the relevant feature alone. We offer a theoretical account that integrates the action–intention effect within the biased competition model of visual selective attention.

Introduction

A widely investigated question in cognitive science concerns the selection mechanisms that allow observers to concentrate visual processing on some aspects of the environment. In this study we explore the dependence of spatial cognitive processes on action intentions. This issue can be addressed in a so-called visual search task, in which the observer searches for a pre-specified target among an array of non-targets. Recently, it has been found that a specific action intention about what to do with the searched-for object, i.e. grasping it or pointing at it, affects how people search for objects in their visual space (Bekkering & Neggers, 2002). In this study we focus on the limitations and targets of this process. We demonstrate that an action intention can determine how people search for objects in space. However, under which conditions and at which level of cognitive processing this effect occurs is as yet unknown.

Neurophysiological studies suggest that up to a certain level individual features are processed independently (e.g. Maunsell and Van Essen, 1983, Moutoussis and Zeki, 2002, Zeki, 1973, Zeki, 1977). In this study we test whether the intention to execute a goal-directed movement has an effect at the level of independent or interdependent feature processing. First, however, we introduce the two theories of visual attention that are, in our view, most relevant to our research question: the biased competition model and the selection-for-action approach.

A currently dominant model of selective attention is the theory of biased competition (Desimone, 1998, Desimone and Duncan, 1995, Kastner and Ungerleider, 2001). This model describes the interplay between bottom-up and top-down sources of attention. Its basic idea is that visual objects in the scene compete for representation, analysis, and control of behavior. This competition results from limitations in processing capacity. On the one hand, the bottom-up input from the visual scene determines the spatial distribution and feature attributes of objects. While this information is being processed, a target may "pop out" owing to a bottom-up bias that directs attention toward salient local inhomogeneities. On the other hand, top-down processes can bias the competition toward behaviorally relevant information, based on the goals of the individual. In its current form, the biased competition model makes no specific predictions about the role of action intention as a modulator of attention, but it could easily be adapted to do so. (See also Birmingham & Pratt, 2005, for further information on the organization of spatial attention.)
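The core computational idea of biased competition can be illustrated with a toy numerical sketch (our own illustration with arbitrary numbers, not a model from this article): each object's share of processing is its bottom-up salience multiplied by a top-down weight, normalized so that all objects compete for a fixed pool of capacity.

```python
# Toy sketch of biased competition (illustrative only, not the
# authors' model): objects compete for limited processing, and
# top-down goals bias that competition via multiplicative weights.

def biased_competition(salience, top_down_weight):
    """Combine bottom-up salience with top-down bias, then normalize
    so the shares sum to 1, i.e. objects compete for fixed capacity."""
    raw = [s * w for s, w in zip(salience, top_down_weight)]
    total = sum(raw)
    return [r / total for r in raw]

# Three objects; object 1 is a salient "pop-out" item.
salience = [1.0, 3.0, 1.0]

# Without a task goal, the salient item wins the competition.
neutral = biased_competition(salience, [1.0, 1.0, 1.0])

# A goal (e.g. searching for object 0) biases competition toward it.
goal_biased = biased_competition(salience, [4.0, 1.0, 1.0])

print(neutral)      # salient item gets the largest share
print(goal_biased)  # top-down bias can overcome bottom-up salience
```

In this sketch the top-down weight does not add capacity; it only redistributes a fixed total, which is the sense in which competition is "biased" rather than processing being enhanced outright.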

More explicitly, the functioning of a perceptual system may be seen as gathering and integrating sensory information in order to adapt to the environmental conditions in which an action must take place; this is essential for the preparation of the planned action. The idea is reflected in different models claiming a close interaction between conscious visual processing and motor behavior (e.g. Allport, 1987, Gibson, 1979, Hommel et al., 2001, Neumann, 1987, Neumann, 1990, Rizzolatti and Craighero, 1998).

In everyday situations people hardly ever search for objects in their environment for purely perceptual purposes. In most cases, they have a clear intention to do something with the object they are searching for. Hence, it would make sense to change the relative weights given to different attributes of a visual object depending on the action currently at hand or planned for the immediate future. For instance, if the intention is to find a dictionary on the bookshelf in order to take it from the shelf, the weight given to the processing of various features might differ from a situation in which one's intention is merely to ascertain that the dictionary is there. In the first case, more weight would be given to processing information about its size and relative orientation in space, because this information is relevant for preparing a grasping movement. If the intention is only to detect the presence of the dictionary, its orientation in space is less important.

Critically, the selection-for-action approach assumes that there are no limitations on perceiving multiple objects, only limitations of effector systems in carrying out multiple actions concurrently (e.g., Allport, 1987, Allport, 1990). Thus, competition for processing resources can be assumed to take place in the action system. Consequently, information about different attributes of an object should be bound together in a way that allows the purposeful use of that object according to the intended action. Selective attentional processing therefore reflects the necessity of selecting the information relevant to the task at hand. Convergent evidence for an action-related attentional system emerges from several experimental paradigms. For instance, Craighero, Fadiga, Rizzolatti, and Umiltà (1999) demonstrated that if a subject has prepared a grasping movement, a stimulus with a congruent orientation is processed faster. In addition, a common selection mechanism for saccadic eye movements and object recognition was found in a study by Deubel and Schneider (1996). Finally, clinical studies with neglect patients have shown that object affordances can improve the detection of visual objects (Humphreys & Riddoch, 2001) and that action relations between objects can improve the detection of both objects (Riddoch, Humphreys, Edwards, Baker, & Wilson, 2003). Recent experimental support for the selection-for-action notion in visual search comes from the study by Bekkering and Neggers (2002) mentioned above. They demonstrated a selective enhancement of orientation processing (compared to color processing) when the task required grasping an object rather than pointing toward it. This finding is in line with the idea that visual perception handles the world in a way that is optimized for upcoming motor acts rather than merely for passive feedforward processing.

The aim of the present study was to examine one of the central remaining issues concerning the action–intention effect reported by Bekkering and Neggers (2002), namely its limitations and targeted processes: does the action intention affect only the action-relevant feature, or does it bias the competition between both features? Bekkering and Neggers found that participants were better able to discriminate the orientation of the stimuli when they had to grasp a target stimulus than when they had to point to the target, since the relative orientation in space is more important for preparing a grasp than for preparing a pointing movement. This suggests that the behaviorally relevant feature can be processed more efficiently. At the same time, the discrimination accuracy of color did not depend on the motor task, as the color of the object should be equally relevant for both grasping and pointing. However, for the comparison of orientation and color discrimination to be valid, the discrimination difficulty of one feature should equal that of the other. Notably, in Bekkering and Neggers' experiment color discrimination performance was in general better than orientation discrimination performance, suggesting that color discrimination could in principle have been easier than orientation discrimination. We therefore first wanted to replicate the previous findings while controlling the discriminability of the two object features within a refined experimental set-up. First, 2D images projected by an LCD projector onto a screen were used as stimuli instead of 3D objects. This enabled a fine matching of the orientation and color contrasts of target and non-target elements, making orientation and color discrimination equally difficult in the first experiment, and allowed a controlled decrease of color contrast in the second experiment. Second, the 2D stimuli allowed direct visual template cueing of both the color and the orientation of the target, whereas orientation was cued auditorily in the 3D set-up of Bekkering and Neggers (2002). Third, the flexibility of target positioning was increased. Finally, the 2D screen allowed a larger set size to be used to manipulate search difficulty.

The target was a conjunction of color and orientation. Participants were required either to search for and point toward the target or to search for and grasp it. We measured the accuracy of the initial saccade. Because the orientation of the target is more important in grasping than in pointing, we expected selectively improved performance in discriminating this feature. As the target's color is equally relevant for both actions, we expected no such change for color.

In the first experiment, the set size was varied to simultaneously manipulate the amount of bottom-up information for both the behaviorally relevant (orientation) and the behaviorally neutral (color) visual feature. Increasing the set size increases the difficulty of the search task (Bundesen, 1990) and thereby the load on cognitive processing. A decreased action–intention effect at the larger set size would indicate that no resources are left for selective enhancement of the behaviorally relevant feature, i.e. that the effect is limited by processing capacity. If, however, the effect does not depend on set size, no capacity limitations need be assumed. We expected the selective enhancement of the action-related feature to be a function of the load on cognitive processing.

Further, we were interested in whether the top-down bias toward the behaviorally relevant feature operates only at the level of that particular feature, or whether it affects a processing level common to both features. In the second experiment a similar conjunction search task was used, but the discriminability of the behaviorally neutral feature was decreased while the discriminability of the behaviorally relevant feature remained the same as in the first experiment. If the action–intention affects only the processing of the behaviorally relevant feature, the effect should not depend on the discriminability of the behaviorally neutral feature. However, if the action–intention affects the competition between the two features (or some other common mechanism), its effect on visual search should decrease, because overall target–non-target discriminability is diminished. Our hypothesis is based on the assumption that the capacity of cognitive processing is limited, causing the features to compete for it. To create a situation unbiased in terms of bottom-up information about feature discriminability, in the first experiment we made the search for color and orientation approximately equally difficult. In the second experiment we purposefully decreased the color contrast and thereby made color discrimination harder. In this situation, color discrimination requires more processing capacity than with the relatively higher color contrast used in Experiment 1. If this additional color-processing capacity is taken from the capacity available for orientation processing, the possibility to bias orientation processing in the grasping condition should be reduced, leading to a decreased enhancement of orientation processing in grasping compared to pointing. However, if the effect of action–intention operates before feature binding, the discriminability of color should have no effect on the capacity used for orientation processing.
In conclusion, if the previously found action-related enhancement indeed reflects biased competition between the features involved, the effect should appear under equal and relatively easy discriminability of both features (Experiment 1) and should decrease when the discriminability of one feature is decreased (Experiment 2).
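The capacity-sharing logic behind this prediction can be sketched numerically (a hypothetical illustration with arbitrary numbers, not a model fitted in the article): if color and orientation draw on a common, fixed capacity, then a harder color discrimination leaves less headroom for the grasping intention to boost orientation processing.

```python
# Toy capacity-sharing sketch (our illustration, arbitrary numbers):
# a fixed processing capacity is divided between color and
# orientation. Harder color discrimination consumes more capacity,
# leaving less room to bias orientation during grasp preparation.

CAPACITY = 1.0  # total capacity shared by both features (arbitrary unit)

def orientation_bias(color_demand, grasp_boost=0.3):
    """The grasping-related enhancement of orientation processing is
    capped by whatever capacity color discrimination leaves over."""
    available = CAPACITY - color_demand
    return min(grasp_boost, available)

# Experiment 1 analogue: high color contrast, easy color discrimination.
easy = orientation_bias(color_demand=0.5)   # the full boost fits

# Experiment 2 analogue: low color contrast, harder color discrimination.
hard = orientation_bias(color_demand=0.8)   # the boost is squeezed

print(easy, hard)  # predicted enhancement shrinks in Experiment 2
```

The alternative hypothesis (the bias acts before feature binding, on the relevant feature alone) would correspond to `orientation_bias` ignoring `color_demand` entirely, so the two experiments discriminate between the accounts.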


Experiment 1

The aim of this experiment was to test whether the task-dependent facilitation of one feature (orientation in the grasping condition) is limited by task difficulty. This question derives directly from previously obtained results. The original Bekkering and Neggers (2002) study showed a maximal action–intention effect for 7 stimuli compared to the set-size conditions of 4 and 10. Hence, the amount of bottom-up information was manipulated directly by set size. The smaller set size

Experiment 2

In the second experiment we wanted to explore this interplay between bottom-up and top-down sources from another perspective. Specifically, we aimed to test whether the enhancement of a behaviorally relevant feature arises at the level of individual visual features or at the level of conjunction processing, where the individual features compete with each other. To do so, we manipulated the discriminability of color, the feature that should be equally relevant for both pointing and

General discussion

The aim of this study was to investigate the biasing effect of action–intention on selective attention in more detail. We corroborated the finding that the intention to grasp an image of an object selectively enhances processing of the orientation of that object compared with a condition in which the task is to reach and point to the object. Moreover, we now show that this selective enhancement occurs even when the task is a rather unnatural pantomimic act and the object is a 2D object without

References (39)

  • S.M. Zeki, Colour coding in rhesus monkey prestriate cortex, Brain Research (1973)
  • A. Allport, Selection for action: Some behavioral and neurophysiological considerations of attention and action
  • A. Allport, Visual attention
  • H. Bekkering et al., Visual search is modulated by action intentions, Psychological Science (2002)
  • D.H. Brainard, The Psychophysics Toolbox, Spatial Vision (1997)
  • C. Bundesen, A theory of visual attention, Psychological Review (1990)
  • F.W. Cornelissen et al., The Eyelink Toolbox: eye tracking with MATLAB and the Psychophysics Toolbox, Behavior Research Methods, Instruments, and Computers (2002)
  • L. Craighero et al., Action for perception: A motor-visual attentional effect, Journal of Experimental Psychology: Human Perception and Performance (1999)
  • R. Desimone, Visual attention mediated by biased competition in extrastriate visual cortex, Philosophical Transactions of the Royal Society of London B (1998)