Implicit visual learning and the expression of learning
Highlights
► Implicit visual learning is difficult to observe because it mainly affects encoding processes.
► Implicit visual and motor learning differ in terms of response times but not in terms of acquired knowledge.
► Implicit learning is influenced not only by selective attention directed to stimuli but also by attention directed to response dimensions.
► Directing attention to the relevant response dimension seems to affect explicit rather than implicit learning processes.
Introduction
It is now widely accepted that implicit learning occurs in the absence of conscious awareness of both the ongoing learning process itself and the outcome of what is learned. An important paradigm for studying implicit learning is the serial reaction time task (SRTT), originating from Nissen and Bullemer (1987). In this task, participants see marked locations on the screen which are mapped to corresponding keys. Participants are instructed to press the corresponding response key whenever an asterisk occurs at a certain location. Unbeknownst to the participants, the locations of the asterisk follow an underlying regular sequence. After several blocks of practice, participants are transferred either to a new, but still regular, sequence or to a random one. This transfer block leads to a performance decrement that immediately disappears once the original regularity is reintroduced. Importantly, participants are not able to explicate their acquired knowledge when asked to do so. Even with more sensitive tests, such as the recently introduced wagering task (Dienes and Seth, 2010, Haider et al., 2011, Persaud et al., 2007) or the process-dissociation procedure (Destrebecqz and Cleeremans, 2001, Haider et al., 2011, Jacoby, 1991), participants’ explicit knowledge usually remains scarce. This dissociation between performance and expressible knowledge is generally taken to indicate implicit learning.
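The block structure of the SRTT described above can be sketched in a few lines of Python. The four locations, the 8-element sequence, and the block lengths below are illustrative placeholders, not the values used by Nissen and Bullemer (1987):

```python
import random

LOCATIONS = [0, 1, 2, 3]              # four marked screen positions (illustrative)
SEQUENCE = [0, 2, 1, 3, 2, 0, 3, 1]   # hypothetical repeating location sequence

def make_block(trials, regular=True, rng=random):
    """Generate one block of target locations.

    Regular blocks cycle through the fixed sequence; the transfer
    block draws locations at random (avoiding immediate repeats).
    """
    if regular:
        return [SEQUENCE[i % len(SEQUENCE)] for i in range(trials)]
    block, last = [], None
    for _ in range(trials):
        loc = rng.choice([l for l in LOCATIONS if l != last])
        block.append(loc)
        last = loc
    return block

# A typical session: regular training blocks, one random transfer
# block, then the original regularity is reintroduced.
session = [make_block(96) for _ in range(5)]
session.append(make_block(96, regular=False))
session.append(make_block(96))
```

The characteristic finding is that response times rise in the random transfer block and recover immediately in the final regular block, even though participants cannot report the sequence.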
More recently, research on implicit learning has started to investigate modality-specific implicit learning (e.g., Deroost and Soetens, 2006a, 2006b, Mayr, 1996, Nattkemper and Prinz, 1997, Remillard, 2003, 2008, 2011, Rüsseler and Rösler, 2000, Willingham, 1999, Willingham et al., 2000, Ziessler, 1994; for a comprehensive review, see Abrahamse, Jiménez, Verwey, & Clegg, 2010). For instance, Mayr (1996; see also Deroost & Soetens, 2006a) has shown that participants can learn a sequence of object (stimulus) locations concurrently with an uncorrelated sequence of responses to the color of the stimuli. Likewise, Remillard (2003, 2008, 2011) provided evidence for implicit learning of a pure visuo-spatial sequence that did not correlate with responses (see also Deroost & Soetens, 2006b). Going even further, Goschke and Bolte (2007) reported experiments showing that participants implicitly learn abstract categories while verbally naming pictures of category exemplars.
However, some findings suggest that implicit visual and visuo-spatial learning, in particular, are not always found. For example, the results of Nattkemper and Prinz (1997; see also Rüsseler and Rösler, 2000, Willingham, 1999, Ziessler, 1994) revealed that participants were not able to learn a visual sequence of stimuli embedded in a sequence of responses. Also, Deroost and Soetens (2006a, Experiments 2, 3 and 4) could not observe visuo-spatial implicit learning when they replicated Mayr’s (1996) experiments with a regular sequence of object locations combined with a random response sequence. They found implicit learning effects only when, as in Mayr’s original experiment, both sequences built into the experiment were regular, even though they were uncorrelated. In addition, Willingham, Nissen, and Bullemer (1989) and Bischoff-Grethe, Goedert, Willingham, and Grafton (2004) could not find evidence for visual implicit learning. Instead, Willingham et al. (2000) provided compelling evidence that participants in the original SRTT learn response locations rather than a visual or visuo-spatial sequence of stimuli. Recently, Knee, Thomason, Ashe, and Willingham (2007) showed that this is particularly true for implicit learning, whereas explicit learning was based on stimulus locations.
Thus, implicit visual or visuo-spatial learning is sometimes found and sometimes not. Notably, two recent lines of research found clear evidence for implicit visual learning (Gheysen et al., 2009, 2010, 2011; Rose et al., 2011). They compared implicit visual and motor learning and observed rather small but reliable effects of implicit visual learning (a benefit of approximately 10–20 ms for regular compared to random material). Rose et al. (2011) used essentially the design of the original SRTT, but disentangled the sequence of stimuli from the response sequence. Participants were trained with short blocks containing either a pure visual sequence or random material (within participants), or with short blocks containing either a motor sequence or random material. The two sequence conditions (visual vs. motor) were entirely identical except that either the sequence of stimuli or the sequence of responses was regular. In both conditions, the results revealed significant performance benefits for the regular sequence material; however, the learning effects in the visual sequence condition were much smaller.
Likewise, Gheysen et al. (2009, 2010, 2011) disentangled the sequences of responses and stimuli: either the color of the cues (visual sequence) or the responses (motor sequence) followed a regular sequence. From the small learning effects they obtained for visual sequence learning, they concluded that implicit visual learning is a rather slow learning process that is much more vulnerable than implicit motor learning (see also Deroost & Soetens, 2006a).
However, it is not clear why visual sequences should be acquired more slowly than motor sequences (Keele, Ivry, Mayr, Hazeltine, & Heuer, 2003). At least two alternative possibilities could account for the differences between implicit visual and motor learning: The first concerns not the learning process itself but differences in the difficulty of expressing the acquired knowledge. The second concerns selective attention, which might also contribute to the differences between implicit visual and motor learning.
To elaborate on the first point, implicit visual or visuo-spatial learning requires an arbitrary stimulus-to-response mapping in order to disentangle the visual sequence from the motor responses. Consequently, acquired visual sequence knowledge might speed up the encoding of the stimuli but cannot automatically prime the next response, as the stimulus-to-response mapping changes from trial to trial. By contrast, knowledge acquired in an implicit motor learning task automatically speeds up response processes, because learned response-response associations directly prime the next responses. Thus, smaller response time benefits for implicit visual compared to implicit motor learning during training are ambiguous: They could mean that the implicit visual learning process itself is slower than the implicit motor learning process (Gheysen et al., 2011). Alternatively, it is conceivable that it is not the learning processes themselves that differ between visual and motor learning, but rather the expression of the acquired knowledge in performance.
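The logic of this argument can be made concrete with a small sketch. The color set, the keys, and the per-trial remapping below are illustrative assumptions, not the exact designs of the cited studies; the point is only that a regular stimulus sequence combined with a trial-wise arbitrary mapping yields an unpredictable response sequence:

```python
import random

COLORS = ["red", "green", "blue", "yellow"]   # hypothetical stimulus set
KEYS = ["d", "f", "j", "k"]                   # hypothetical response keys
COLOR_SEQ = [0, 2, 1, 3, 2, 0, 3, 1]          # regular visual sequence

def visual_trial(t, rng=random):
    """One trial of a hypothetical visual sequence condition: the
    stimulus color follows the fixed sequence, but the color-to-key
    mapping is re-randomized on every trial. Sequence knowledge can
    therefore speed stimulus encoding, yet it cannot prime the next
    key press, because the required key remains unpredictable."""
    color = COLORS[COLOR_SEQ[t % len(COLOR_SEQ)]]
    mapping = dict(zip(COLORS, rng.sample(KEYS, len(KEYS))))
    return color, mapping[color]
```

In a motor condition the relation is reversed: the key sequence is regular, so learned response-response associations can directly prime each upcoming key press and show up fully in response times.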
Our second point, that a difference in selective attention might have caused the smaller learning effects of implicit visual learning, is based on Willingham et al.’s (2000) observation that implicit learning in the original SRTT is primarily response-location based. Several researchers assume that implicit learning is modulated by selective attention (see, e.g., Cock et al., 2002, Deroost et al., 2008, Jiang and Chun, 2001, Jiménez, 2003, Jiménez and Méndez, 1999, 2001, Jiménez et al., 2006, Jiménez and Vázquez, 2005, 2011). Selective attention can be regarded as a prerequisite of implicit learning in the sense that the explicit task instruction (or the task set) must direct attention toward the dimension of the task containing the regular sequence. If this dimension is not part of the explicit task set, no implicit learning will occur (e.g., Eitam et al., 2009, Jiang and Chun, 2001, Tanaka et al., 2008). However, this research on selective attention mainly focuses on the question of how selective attention towards stimuli influences implicit learning. Research concerning the role of attention towards response dimensions is rare, even though a task set refers not only to stimuli but also to response selection processes (e.g., Hommel, 2010). In the standard SRTT, participants are usually instructed to respond as fast as possible to a certain position on the screen with a spatially assigned response key. This might lead them to attend more to the spatial positions of the response keys than to the specific characteristics of the cues (Willingham et al., 2000). Consequently, implicitly learning a motor sequence might benefit from this focus of attention towards response locations. By contrast, with a visual sequence the response keys are randomly mapped to the visual stimuli, and thus attention towards response locations might not enhance implicit visual learning. This difference in selective attention might explain why visual implicit learning shows smaller learning effects.
To summarize, this brief survey of the findings on implicit visual and visuo-spatial learning revealed a somewhat disparate picture: Sometimes such learning is not found at all, and when it is observed, it leads to only small learning effects. We proposed two reasons why this might be the case: First, implicitly acquired visual knowledge might differ from implicit motor sequence knowledge in terms of the difficulty of expressing the respective knowledge in behavioral measures such as response times or error rates. Second, participants might devote more attention to the spatial locations of the stimuli (or response keys) than to their characteristics within an SRT learning task, leading to weaker learning of the visual sequence.
Overview of the experiments
The goal of the current three experiments was to further investigate the characteristics of visual sequence learning. In particular, we focused on (a) the effect of selective attention and (b) potential differences in the difficulty to express the acquired knowledge as two possible reasons why visual implicit learning might have led to smaller learning effects than implicit motor learning.
All experiments used the SRTT paradigm of Rose et al. (2011). A colored target appeared on the screen
Experiment 1
As already mentioned, the main goal of Experiment 1 was to test our method. For this purpose, we compared the knowledge of participants who either received a visual sequence (regular sequence conditions) or random material (control conditions) during training. If the wager task provides a sensitive method to assess knowledge after training, we should find more knowledge in the regular sequence than in the control conditions. In addition, we included the Mouse and the Keyboard conditions as a
Experiment 2
After having shown that our method provides a well-suited basis for our research question, Experiment 2 focuses on the comparison of implicit visual and motor learning. Again, we used the two different response devices as a manipulation of selective attention, leading to four different conditions: the Mouse-visual, Mouse-motor, Keyboard-visual, and Keyboard-motor conditions. Implicit knowledge was assessed off-line with the wager task.
If implicit visual learning processes are slower than
Experiment 3
Experiment 3 aimed to further compare implicit visual and motor learning under different response devices. We used a within-participants design in which participants concurrently learned a visual and a motor sequence. This within-design made it possible to investigate the independence of implicit visual and motor sequence learning. A few studies have already shown that participants are able to concurrently learn a visuo-spatial and a motor sequence (e.g., Deroost and Soetens, 2006a,
General discussion
The experiments reported here were aimed at further investigating implicit visual and motor learning. Basically, prior findings concerning implicit visual learning are somewhat ambiguous, and when it is found, the learning effects are much smaller than implicit motor learning effects (Gheysen et al., 2009, 2010, 2011; Rose et al., 2011). As argued in the introduction, we suspected that two factors could have contributed to these small learning effects:
Acknowledgments
This research was supported by the German Research Foundation (DFG; HA-5447/2-1 and RO-2653/2-1). We thank Annette Bräutigam, Melina Koechlim, and Diana Lamsfuß for help with data collection.
References (62)
- Dienes, Z., & Seth, A. (2010). Gambling on the unconscious: A comparison of wagering and confidence ratings as measures of awareness in an artificial grammar task. Consciousness and Cognition.
- Haider, H., et al. (2011). An old problem: How can we distinguish between conscious and unconscious knowledge acquired in an implicit learning task. Consciousness and Cognition.
- Jacoby, L. L. (1991). A process dissociation framework: Separating automatic from intentional uses of memory. Journal of Memory and Language.
- Lau, H., & Rosenthal, D. (2011). Empirical support for higher-order theories of conscious awareness. Trends in Cognitive Sciences.
- Lavie, N. (2005). Distracted and confused? Selective attention under load. Trends in Cognitive Sciences.
- Nissen, M. J., & Bullemer, P. (1987). Attentional requirements of learning: Evidence from performance measures. Cognitive Psychology.
- Rüsseler, J., & Rösler, F. (2000). Implicit and explicit learning of event sequences: Evidence for distinct coding of perceptual and motor representations. Acta Psychologica.
- Seth, A. K., Dienes, Z., Cleeremans, A., Overgaard, M., & Pessoa, L. (2008). Measuring consciousness: Relating behavioural and neurophysiological approaches. Trends in Cognitive Sciences.
- Abrahamse, E. L., Jiménez, L., Verwey, W. B., & Clegg, B. A. (2010). Representing serial action and perception. Psychonomic Bulletin & Review.
- Bischoff-Grethe, A., Goedert, K. M., Willingham, D. B., & Grafton, S. T. (2004). Neural substrates of response-based sequence learning using fMRI. Journal of Cognitive Neuroscience.