In our everyday life, successful behavior often depends on the coordination between perception and action. It is well established that our actions are strongly guided by visual perception: Activities such as reaching, grasping, and pointing to objects are performed more accurately and faster when they occur within our visual field and, in particular, when presented within the current focus of our visual attention (e.g., Adam, Buetti, & Kerzel, 2012b; Castiello, 1999; Ma-Wyatt & McKee, 2007). Effects of action on visual perception, in turn, are more subtle and have only recently been studied (e.g., Bekkering & Neggers, 2002; Fagioli, Hommel, & Schubotz, 2007; Vishton et al., 2007; Wohlschläger, 2000). Emerging from this research is growing evidence that visual attentional mechanisms are affected by concurrent action planning (Baldauf & Deubel, 2008; Baldauf, Wolf, & Deubel, 2006; Fischer & Hoellen, 2004) and the position of our hands (Abrams, Davoli, Du, Knapp, & Paull, 2008; Reed, Betz, Garza, & Roberts, 2010; Reed, Grubb, & Steele, 2006; see also Adam et al., 2008). We briefly review the most relevant findings.

During the preparation of goal-directed hand movements, perceptual processing has been found to be biased toward action-relevant locations. Baldauf and colleagues (2006) claimed that this “selection-for-action” involves attention being spread to all goal locations in parallel. Similar deployment of attention was also observed during the preparation of coordinated bimanual movements (Baldauf & Deubel, 2008). These studies add to the body of work demonstrating that preparing hand gestures influences visual processing. For instance, preparing for reaching or grasping facilitates the processing of location or size, respectively (Fagioli et al., 2007). Similarly, planning of grasping responses drives attention to selected objects, whereas preparing pointing responses allocates attention to space (Fischer & Hoellen, 2004). In addition, Symes, Tucker, Ellis, Vainio, and Ottoboni (2008) employed a change-blindness paradigm while instructing participants to hold one of two response devices. They found that the action being prepared (power vs. precision grasp) facilitated the detection of the changed object when it was congruent in size (large vs. small). Moreover, physiological recordings in nonhuman primates revealed visuo-tactile neurons that respond to both visual stimuli and motor feedback (efference copy signals) from the body (Graziano & Gross, 1998; see also Andersen, Snyder, Bradley, & Xing, 1997, for a review).

Interactions between action and attention are not restricted to action preparation. Further support for the close interplay between attention and action comes from the effect of static hand postures on spatial attention. Reed and colleagues (2006) studied whether the location of one’s resting hand affected attentional selection. Participants placed one hand on a computer monitor and were faster in detecting probes near their hand (see also Adam, Bovend’Eerdt, van Dooren, Fischer, & Pratt, 2012a). Abrams and colleagues (2008) measured a variety of attention-related effects while observers held the display monitor with both hands. They found steeper visual search slopes, greater inhibition of return, and a stronger attentional blink, as compared with the traditional condition with separated visual and motor spaces (i.e., vertical monitor and horizontal keyboard). In an attempt to determine which processes are affected by hand posture, Gozli, West, and Pratt (2012) used a similar hand posture manipulation to examine performance in tasks demanding either spatial or temporal processing. They found that placing the hands near the display improved performance in temporal tasks while attenuating performance in spatial tasks, suggesting that hand posture biases activity toward either the magnocellular (near hands) or the parvocellular (far hands) visual pathway.

Taken together, these studies have started to uncover details of the interplay between manual actions and visual processing. However, this work has generally segmented the normally continuous stream of movement into discrete units of analysis. In other words, the focus has been on static hand postures or single actions. This convenience-driven methodological practice limits our knowledge about attention deployment during continuous movements in more realistic tasks. For example, swiping the finger across the surface of a tablet PC while scrolling through a text has no specific aiming requirements; the resulting absence of continuous error correction raises the possibility of a fundamentally different attention deployment process.

Recently, a few studies reexamined the online influence of action on perception (for recent reviews, see Brockmole, Davoli, Abrams, & Witt, 2013; Tseng, Bridgeman, & Juan, 2012). One such example is Adam and colleagues (2012a), who studied the effect of hand proximity on letter identification performance while participants adopted a bimanual posture (static) or performed a movement (dynamic) underneath a display. Results confirmed and extended earlier findings of improved probe identification near the hand (near-hand effect) to bimanual continuous movements. Another example is a study by Jackson, Miall, and Balslev (2010), which examined the direct effect of proprioceptive cues on attention allocation. They found that applying a directional perturbation (left or right) during forward reaching movements improved the detection of probes presented in the perturbation direction. Both studies illustrate that proprioceptive information regarding the current hand posture can affect the distribution of spatial attention during the execution of hand movements. However, it remains unclear whether and how other characteristics of hand dynamics, such as (1) the continuously changing proximity of the hand to the probe, (2) the direction of hand movement, or (3) the time course of the movement, can affect visual selection.

To explore these issues, we combined visually concealed continuous hand movements (Adam et al., 2012a) with an attentionally demanding letter discrimination task (Braun & Julesz, 1998) that was presented contingent upon the course of hand motion. Our participants were required to move their (concealed) right hand back and forth, from side to side, under a display. During the hand movement, a brief visual probe stimulus appeared contingent upon the hand passing through one of six positions. We hypothesize that if attention is driven only by the near-hand effect, probe discrimination should depend on hand proximity to the probe alone. However, if attention is also driven by hand movement direction, probe discrimination should additionally depend on this factor.

Method

Participants

A convenience sample of 11 participants (age: 20–34 years; 4 male; all right-handed) with normal or corrected-to-normal vision participated in the experiment. They gave written informed consent and were paid for their participation.

Apparatus

Participants were seated in front of a two-layered computer desk (see Fig. 1, left panel). Their right hand was placed on the keyboard shelf below a 22-in. LCD screen (65° × 41° usable field of view), which was set on the top layer of the desk at an angle of 30° to the horizontal. When viewing the screen from above (viewing distance, 35 cm), the right hand was invisible to participants. Hand position was monitored via a single-button Apple optical computer mouse that was held by the right hand and allowed hand-position-contingent probe onsets. The computer mouse was also used for recording participants’ responses. For 5 randomly selected participants, eye fixation was verified with a head-mounted eyetracker (EyeLink II).

Fig. 1

Photo and illustration of the experimental layout and display. Participants were seated in front of a computer desk with their right hand placed on the keyboard shelf under a tilted LCD screen. On each trial, observers moved their right hand from right to left and back. Eye fixation was monitored with a head-mounted eyetracker (left panel). Attentional probes were displayed when the right hand reached positions R (right), C (center), or L (left) during either leftward or rightward movement (right panel)

Stimuli

The experiment was programmed and controlled in MATLAB. All stimuli were generated by using the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997) and displayed in white on a black background. The attentional probe was a rotated T or L shape (size, 2.4° × 2.4°; eccentricity, 10.3°) that was presented either to the left or to the right of a fixation cross (size, 2° × 2°) that was shown continuously 6° below the display center (position C). After an individually adjusted stimulus onset asynchrony (SOA), the probe was followed by an F-shaped mask at the same location, thus obscuring the probe’s identity.
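For illustration, the probe–mask sequence can be sketched with standard Psychtoolbox calls. This is a minimal sketch, not the original experimental code: the letters are drawn upright via DrawText (the actual experiment presented rotated T/L shapes, which would require textures), and all coordinates and the SOA value are example assumptions.

```matlab
% Minimal Psychtoolbox sketch of the probe-mask sequence (simplified;
% coordinates and SOA are illustrative placeholders).
win = Screen('OpenWindow', 0, 0);                    % full-screen black window
[cx, cy] = RectCenter(Screen('Rect', win));
probeX = cx + 210; probeY = cy + 120;                % example right-side probe location
soa = 0.085;                                         % example SOA, ~85 ms

Screen('DrawText', win, '+', cx, cy + 120, 255);     % fixation cross (stays on)
Screen('DrawText', win, 'T', probeX, probeY, 255);   % probe (T or L)
probeOnset = Screen('Flip', win);                    % probe appears

Screen('DrawText', win, '+', cx, cy + 120, 255);
Screen('DrawText', win, 'F', probeX, probeY, 255);   % F-shaped mask, same location
Screen('Flip', win, probeOnset + soa);               % mask replaces probe after the SOA
```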

Procedure

On every trial, participants were required to move their hand once from the right side to the left side under the computer screen and back (thus covering a distance of 45 cm twice). Before each movement, two short audio tones (1,200 Hz) were played with an interval of 1,200 ms; they both cued participants to initiate the hand movement and indicated the time from the start to the reversal of the movement, thus prescribing a movement speed of 37.5 cm/s. During the hand movement, the visual probe was presented briefly, followed by a mask. In order to prevent direct fixation of the probe, we used short SOAs (typically <100 ms) that were individually adjusted through an adaptive staircase procedure. On each trial, the probe was displayed either in the lower left or in the lower right location of the screen with one of six equiprobable onset times: The probe appeared either with the hand reaching position R, C, or L while moving to the left side of the screen or with the hand reaching position L, C, or R while moving back toward the starting position under the right edge of the screen (Fig. 1, right panel). After movement completion, the two probe alternatives were presented near fixation, and participants indicated the probe identity with a mouse click. Trials exhibiting saccadic eye movements during the hand movement were discarded.
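The hand-position-contingent probe onset can be illustrated by polling the mouse carried by the concealed hand until it crosses the trigger position for the current trial. The sketch below uses assumed variable names and illustrative coordinate values; the original code is not published.

```matlab
% Sketch of hand-position-contingent probe triggering (illustrative values).
triggerX   = [1400, 960, 520];       % example x-coordinates of positions R, C, L
trialPos   = 2;                      % this trial triggers at position C
movingLeft = true;                   % current sweep direction

triggered = false;
while ~triggered
    [mx, ~] = GetMouse;                            % current (concealed) hand position
    if movingLeft
        triggered = (mx <= triggerX(trialPos));    % crossed while moving left
    else
        triggered = (mx >= triggerX(trialPos));    % crossed while moving right
    end
end
% ... present the probe and mask here (see the Stimuli sketch above)
```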

This procedure led to a total of 24 different trial conditions (2 probe positions × 2 letter probes × 6 hand positions). Each block consisted of 30 trials: 24 trials with probe presentation (1 trial per condition) and 6 additional trials without probes. This paradigm enables the examination of the influence of both hand proximity (near-hand effect) and hand movement direction (movement direction effect) on the allocation of covert attention.
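As a sketch of how such a block could be assembled (the numeric coding of conditions is a hypothetical choice for illustration; all-zero rows stand for the no-probe trials):

```matlab
% Sketch of one block's trial list: the full 2 x 2 x 6 factorial crossing
% (24 probe trials) plus 6 catch trials without probes, in random order.
[probeLoc, probeLetter, handPos] = ndgrid(1:2, 1:2, 1:6);
probeTrials = [probeLoc(:), probeLetter(:), handPos(:)];  % 24 x 3 design matrix
catchTrials = zeros(6, 3);                                % all-zero rows = no probe
block = [probeTrials; catchTrials];
block = block(randperm(size(block, 1)), :);               % shuffle trial order
```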

Participants were trained on the hand motion and probe discrimination task for at least 1–2 h before data collection. Participants started with an SOA value of 250 ms that was decreased by 50 ms if performance in the previous block exceeded 85 % correct discriminations and increased by 50 ms if performance fell below 65 % correct discriminations. The training ended when participants performed probe identification at 75 % correct with SOA values <200 ms. However, since participants’ performance could further improve, this staircase procedure continued during testing. Each participant was tested for 3–5 h in total, on separate days over a period of 1–2 weeks. This resulted in 1,200–1,500 trials per participant.
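The block-wise staircase amounts to a simple update rule, sketched below as a small function (the function name and arguments are assumptions; `correct` is the vector of discrimination outcomes from the previous block, `soa` is in seconds):

```matlab
function soa = updateSoa(soa, correct)
% Block-wise SOA staircase: step 50 ms; thresholds 85 % / 65 % correct,
% as described above. Starting value would be soa = 0.250 (250 ms).
pc = mean(correct);                 % proportion correct in the last block
if pc > 0.85
    soa = soa - 0.050;              % task too easy: shorten the SOA
elseif pc < 0.65
    soa = soa + 0.050;              % task too hard: lengthen the SOA
end
end
```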

Results

Data from participants with and without eye tracking were very similar and were averaged together. Experimental trials with movement times <1.4 or >3.0 s or with SOAs >220 ms were excluded to ensure homogeneity of performance and to prevent contamination from probe-directed eye movements (3 % of all the data). Average movement time was 2.1 s (SD = 0.13), and average SOA was 85 ms (SD = 18). Mean probe discrimination performance across participants as a function of the time course of hand position (along the x-axis) is shown in Fig. 2, separately for the two probe positions.
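For concreteness, this exclusion step can be expressed as a simple logical filter. The sketch assumes the trial data sit in a MATLAB table with hypothetical column names; the original analysis code is not published.

```matlab
% Sketch of the trial exclusion criteria: drop trials with movement times
% outside 1.4-3.0 s or SOAs above 220 ms (about 3 % of all trials).
valid  = trials.movementTime >= 1.4 & trials.movementTime <= 3.0 ...
         & trials.soa <= 0.220;
trials = trials(valid, :);           % keep only valid trials
```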

Fig. 2

Probe discrimination performance. Performance on trials with left probe location (open circles) or right probe location (full circles), depending on hand position (x-axis, proportional to time on trial). Each circle denotes average performance (with SE)

A repeated measures analysis of variance (ANOVA) was conducted on the mean performance in target discrimination, with hand position (six levels) and probe location (two levels) as within-subjects variables. We found only a main effect of hand position, regardless of probe location, F(5, 50) = 6.29, p < .005 (M = 77.5 %, 76.9 %, and 74.9 % for positions R, C, and L, respectively, while participants moved their hand leftward, and 80.7 %, 77.6 %, and 76.9 % for positions L, C, and R, respectively, while they moved their hand rightward during the latter part of the motion course).

Trials were then classified with regard to the proximity between probe location and hand position (near, intermediate, and far proximity) and with regard to the direction of hand movement relative to the probe (toward, away; Fig. 3a). For example, if the hand moved leftward, triggering a right-side probe onset while passing under the screen center, this constituted an intermediate–away condition. A repeated measures ANOVA evaluated effects of hand proximity and hand direction on discrimination performance.
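This reclassification can be sketched as follows. All names and values are illustrative (in particular the distance bin edges d1 < d2); the example values reproduce the intermediate–away case described above.

```matlab
% Sketch of recoding one trial into proximity and direction categories.
% handX, probeX = screen x-coordinates at probe onset;
% dirX = -1 for leftward and +1 for rightward movement.
handX = 960; probeX = 1400; dirX = -1;                 % hand at center, right probe, moving left
d1 = 300; d2 = 700;                                    % assumed distance bin edges (px)

dist = abs(probeX - handX);                            % hand-probe distance
labels = {'near', 'intermediate', 'far'};
proximity = labels{1 + (dist > d1) + (dist > d2)};     % -> 'intermediate'
toward = (sign(probeX - handX) == dirX);               % -> false: moving away from probe
```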

Fig. 3

Trial classification and results according to hand proximity and direction of motion at probe onset (a, c). Trial classification and performance according to the time course of hand movement and probe location in relation to hand movement direction (b, d)

The main effect of hand proximity was not significant, F(2, 20) = 0.187, p > .5, but there was a significant main effect of hand movement direction, F(1, 10) = 7.160, p < .025. Probe discrimination was better when the hand moved toward the probe (e.g., hand moving rightward when the probe appeared on the right side) than when the hand moved away from the probe (78.4 % vs. 76.8 %, respectively). Importantly, we found a significant interaction between hand proximity and hand movement direction, F(2, 20) = 10.59, p < .001. Figure 3c shows a trend for a near-hand superiority effect for movements away from the probe (black bars), but not for movements toward the probe (white bars). In the latter condition, probe discrimination was best when the hand was far from the probe and decreased as the hand moved closer to it (clearly the opposite of the near-hand advantage). Together, these findings document that probe discrimination is affected by hand movement direction when the hand is far from the probe.

We further classified the two directional components of each trial with regard to the phase of the hand movement at which the probe was displayed (start, intermediate, or end) and the probe location with respect to the spatial course of the movement (near the movement start point or near the movement endpoint; see Fig. 3b). We found that performance was highest when the probe was displayed at movement start and decreased significantly over the movement time course, F(2, 20) = 6.301, p < .01 (M = 79.3 %, 77.3 %, 76.1 %). Moreover, performance was significantly better when the probe was presented near the movement endpoint, F(1, 10) = 7.160, p < .025 (M = 78.3 % vs. 76.8 %; Fig. 3d).

In order to assess whether the execution of the two-element hand movements was affected by the presence of the probes, we examined movement times across the screen (i.e., from position R to L or from position L to R, for leftward or rightward hand movements, respectively) in different trial conditions. We found significantly longer movement times on trials with probes than on trials without probes (M = 443 vs. 424 ms), t = −6.89, p < .01. Movement times with probes were also significantly shorter during rightward (return) hand movements, as compared with leftward (initial) hand movements (M = 435 vs. 451 ms), F(1, 10) = 8.58, p < .05. Most important, probe location did not affect movement time, F(1, 10) = 0.21, p > .5.

Discussion

This study evaluated visual discrimination performance during continuous hand movements in order to determine whether attention deployment is systematically related in space or time to movements without spatial targeting requirements. Our main finding is a strong modulation of the previously established near-hand effect by the direction of hand movement. While previous work has either used static hands (e.g., Reed et al., 2010; Reed et al., 2006) or averaged across movement directions (Adam et al., 2012a), we showed an effect of hand movement direction when the hand was far from the probe location—namely, at the opposite side of the screen. In this condition, probe discrimination performance increased substantially when the hand moved toward the probe, as compared with when the hand moved away from it. The near-hand effect appeared weaker or absent and was overshadowed by this movement-direction-specific far-hand effect. This novel result is shown in Fig. 3 and suggests that cross-talk between hand movement and visual-spatial attention is strongest when a movement is started. This conclusion conflicts with frequently cited results by Posner and Keele (1969; as cited in Ells, 1973), who found that the beginning and end of a manual movement require increased attentional capacity. However, because those authors studied goal-directed tracking movements (cf. Ells, 1973, p. 11), this discrepancy likely reflects the fact that nontargeting movements have different attentional control settings.

Other aspects of our results are in line with previous work. Specifically, probe discrimination performance was better when the probe was presented at the end location of the movement rather than at its start location. This observation is consistent with the idea that attention is shifted ahead to a discrete action-relevant location. The finding replicates earlier studies showing that covert attention is allocated during hand movement planning (e.g., Baldauf et al., 2006; Fischer, 1997; for a review, see also Baldauf & Deubel, 2010) and with recent work exploring the effect of hand position on covert attention (Reed et al., 2010; Reed et al., 2006). Specifically, our findings show that attentional allocation is driven not only by hand placement in a resting state (i.e., holding the hand still in a certain location within the display), but also in continuous manual motion when this motion is visually concealed.

Our findings are consistent with a bimodal neuronal integration mechanism that processes both visual information and motor feedback (efference copy signals) from the body (Graziano & Gross, 1998). This mechanism, in turn, provides an online, multisensory representation of visual information in peripersonal space centered on active body parts (see Graziano, 2001; Graziano & Gross, 1998) and is also involved in directing spatial attention (Bremmer, Schlack, Duhamel, Graf, & Fink, 2001; Halligan, Fink, Marshall, & Vallar, 2003). This bimodal integration mechanism has been invoked to account for earlier findings of a near-hand advantage for visual attention in search, detection, and attentional blink tasks (cf. Abrams et al., 2008). More recently, it has also been proposed to account for the modulating effects of hand position in flanker interference tasks (Davoli & Brockmole, 2012). Our results suggest that this integration mechanism cannot account for attention deployment during continuous hand motion unless one assumes that continuous motion is less attention demanding than discrete aiming or static posturing, thereby allowing forward displacement of attention in the direction of the ongoing movement. Further study of this proposed mechanism may expand our understanding of information uptake in real-life situations, such as swiping movements and other manual interactions with hand-held devices—for example, smart phones and tablet PCs (Dufau et al., 2011; Miller, 2012).

To summarize, the results of our movement-contingent attentional probing method show that this approach is capable of revealing the dynamics of visual attention deployment relative to an ongoing movement. Attention is driven by the direction of hand motion and is, therefore, shifted ahead of the current hand position at the start of a movement. This mechanism complements changes in visual attention induced by a planned but not yet executed hand movement (Baldauf et al., 2006) and relies on proprioceptive information from the ongoing motor execution.