Closing the eyes helps people to remember. When faced with a difficult task, people often spontaneously close their eyes or look away (Doherty-Sneddon, Bruce, Bonner, Longbotham, & Doyle, 2002; Doherty-Sneddon & Phelps, 2005; Glenberg, Schroeder, & Robertson, 1998). Furthermore, instructing individuals to close their eyes or avert their gaze from the experimenter’s face significantly improves their performance on a variety of cognitive tasks (Doherty-Sneddon, Bonner, & Bruce, 2001; Glenberg et al., 1998; Markson & Paterson, 2009; Phelps, Doherty-Sneddon, & Warnock, 2006; Wais, Rubens, Boccanfuso, & Gazzaley, 2010). Eyeclosure has also been found to improve memory for events. Wagstaff et al. (2004) found that closing the eyes enhanced participants’ memory of a past public event (Princess Diana’s funeral). Perfect et al. (2008) extended this research by examining the effect of eyeclosure on memory for everyday events; they found that eyeclosure improved memory for both live and videotaped events, tested in either free or cued recall. Perfect, Andrade, and Eagan (2011) showed that eyeclosure was effective in reducing false memories for a staged event, particularly when the interview environment was noisy. Mastroberardino, Natali, and Candel (2010) examined children’s memory of a moderately emotional event and found that children who closed their eyes gave more correct responses to questions about the event than did children who kept their eyes open. Vredeveldt, Baddeley, and Hitch (2011) investigated memory for a violent event and found that eyeclosure improved eyewitness memory even when witnesses were tested after a week’s delay and several intervening retrieval attempts. Thus, evidence is accumulating for the robustness of the eyeclosure effect.

The idea that closing the eyes helps memory is not new. Not only has it been expressed in popular media (for instance, in the 1969 song “Close Your Eyes and Remember” by Minnie Riperton), but it has also been incorporated into various interview procedures, such as hypnosis (Barber, 1969; Hibbard & Worring, 1981; Weitzenhoffer & Hilgard, 1962) and the cognitive interview (Fisher & Geiselman, 1992). The cognitive interview comprises a number of social and cognitive techniques and has been shown to substantially improve eyewitness memory (for meta-analyses, see Köhnken, Milne, Memon, & Bull, 1999; Memon, Meissner, & Fraser, 2010). It is widely used in police interviews in the U.K. (Clarke & Milne, 2001; Milne & Bull, 2003; Shawyer, Milne, & Bull, 2009), and similar techniques are used elsewhere (e.g., Clément, Van de Plas, Van den Eshof, & Nierop, 2009; Fahsing & Rachlew, 2009). Nevertheless, a number of problems with its practical implementation have been reported, mainly relating to the complex and time-consuming nature of the procedure (Clarke & Milne, 2001; Kebbell, Milne, & Wagstaff, 1999; Milne & Bull, 2002). In response, various researchers have proposed simplified versions of the cognitive interview, which have been found to be just as effective as the full procedure (Dando, Wilcock, & Milne, 2009a; Dando, Wilcock, Milne, & Henry, 2009b; Davis, McMahon, & Greenwood, 2005; Milne & Bull, 2002; Verkampt & Ginet, 2010). The eyeclosure studies discussed above suggest that even a method as simple as closing the eyes during the interview can substantially benefit eyewitness memory. The present article examines why eyeclosure improves memory: what are the mechanisms behind this effect?

Two main hypotheses, not necessarily mutually exclusive, have been put forward to explain the eyeclosure effect. The first is the cognitive load hypothesis, which holds that closing the eyes improves memory by freeing cognitive resources that would otherwise be devoted to monitoring the environment. The hypothesis is based on the idea that people have a limited pool of cognitive resources (Baldwin, 1894; Cherry, 1953; Craik, 1948; Kahneman, 1973) and is grounded in Glenberg’s (1997) embodied cognition account. Glenberg proposed that the primary purpose of memory is to serve action, and construed memory retrieval and environmental monitoring as two competing tasks. When recollection is difficult, environmental monitoring must be suppressed so that resources can be devoted to this complex cognitive process. Such suppression is reflected in a number of behavioural indices; Kundera (1996), for example, observed that a person engaged in effortful retrieval starts walking more slowly. The cognitive load hypothesis has been proposed to explain the memory benefits of both gaze aversion (Doherty-Sneddon et al., 2001; Doherty-Sneddon & Phelps, 2005) and eyeclosure (Perfect et al., 2008; Perfect et al., 2011).

A second potential explanation of the eyeclosure effect is the modality-specific interference hypothesis (cf. Wagstaff et al., 2004). This hypothesis holds that cutting out visual interference from the environment promotes visualisation of the witnessed event, which in turn improves recall of visual details. Modality-specific interference has been researched extensively in the context of the multi-component working memory model (Baddeley, 1986, 2007; Baddeley & Hitch, 1974), which describes a central executive system that supervises two modality-specific subsystems (the visuospatial sketchpad and the phonological loop) and one multimodal subsystem (the episodic buffer; Baddeley, 2000). Over the years, evidence has accumulated that concurrent tasks in the same modality interfere more with each other than do tasks in different modalities (e.g., Allport, Antonis, & Reynolds, 1972; Baddeley & Hitch, 1974; Brooks, 1967, 1968, 1970; Postle, Idzikowski, Della Sala, Logie, & Baddeley, 2006; Segal & Fusella, 1970). The first part of the hypothesis, as an explanation of the eyeclosure effect, holds that cutting out visual perception of the environment facilitates visual imagery. This idea is supported by findings that brain areas active in visual perception are also active in visual imagery (Ganis, Thompson, & Kosslyn, 2004; Kosslyn & Thompson, 2003; O'Craven & Kanwisher, 2000) and that eyeclosure significantly increases mental simulation (Caruso & Gino, 2011) and visual imagery (Wais et al., 2010). The second part of the hypothesis holds that visual imagery improves recall of visual information. This idea was proposed approximately 2,500 years ago by the poet Simonides of Ceos (Yates, 1966); many centuries later, it has been confirmed by experimental studies (Jonides, Kahn, & Rozin, 1975; Paivio, 1969, 1971), as well as by neurological evidence (Ishai, Ungerleider, & Haxby, 2000; Mechelli, Price, Friston, & Ishai, 2004; Wais et al., 2010). On this account, then, eyeclosure promotes visual imagery, which facilitates retrieval of visual information from long-term memory.

Although the difference between the two hypotheses is subtle, they make different predictions about which type of to-be-remembered information should benefit from eyeclosure. Whereas the cognitive load hypothesis predicts that eyeclosure will improve memory for both visual and auditory information, the modality-specific interference hypothesis predicts selective memory benefits for visual information only. Evidence to date has been mixed. Perfect et al. (2008) found support for the modality-specific interference hypothesis in their Experiment 2 but concluded that the majority of their evidence favoured the cognitive load hypothesis. Moreover, Perfect et al. (2011) found that eyeclosure reduced false memories particularly when participants were exposed to auditory distraction, suggesting that eyeclosure reduces competition for general, rather than modality-specific, resources. Vredeveldt et al. (2011), on the other hand, found clear support for the modality-specific interference hypothesis, with only limited support for the cognitive load hypothesis. They suggested that the diverging findings may be explained by considering the level of specificity of the to-be-remembered information.

Memory grain size is the level of specificity at which a person chooses to report information about a remembered event (Goldsmith, Koriat, & Pansky, 2005; Goldsmith, Koriat, & Weinberg-Eliezer, 2002). For instance, when asked how much you paid for yesterday’s groceries, you could answer “$34.78” (a fine-grain response) or “between 30 and 40 dollars” (a coarse-grain response). Vredeveldt et al. (2011) found a modality-specific benefit of eyeclosure for fine-grain, but not coarse-grain, responses. One potential explanation is that visualisation enables witnesses to ‘see’ the precise answer in their mind’s eye (for example, the exact amount displayed at the bottom of the grocery bill). In addition to this modality-specific benefit, Vredeveldt et al. also found a general benefit of eyeclosure for coarse-grain responses in immediate recall. They therefore hypothesised that eyeclosure involves two processes: it reduces general cognitive load (resulting in an overall increase in correct coarse-grain recall), and it facilitates visualisation (resulting in an increase in correct fine-grain recall of visual information). Like the working memory model (Baddeley & Hitch, 1974), this idea accommodates the involvement of both general and modality-specific processes in the eyeclosure effect.

In previous studies, the eyes-closed condition was compared with a no-instruction control condition in which participants kept their eyes open. In an interview setting, this control condition may involve considerable interference from the presence of the interviewer (Glenberg et al., 1998; Wagstaff et al., 2008), consisting of both visual (e.g., looking at the interviewer’s face; cf. Posamentier & Abdi, 2003) and auditory (e.g., attending to the interviewer’s tone of voice; cf. Belin, Zatorre, Lafaille, Ahad, & Pike, 2000) components. To tease apart the effects of visual and auditory distraction, the present study compared an eyes-closed condition (no visual distraction, low auditory distraction) with three different eyes-open conditions. First, we included a control condition in which participants looked at a blank screen while listening to the interviewer’s questions (low visual distraction, low auditory distraction). If eyeclosure helps memory by reducing distraction from the environment, memory benefits should also be observed when participants look at a blank screen. However, if the effect is unique to the act of closing the eyes (perhaps because eyeclosure increases alpha activity; Wagstaff et al., 2004), memory benefits should not be observed when participants look at a blank screen. Second, we introduced a visual distraction condition in which participants viewed visual stimuli (high visual distraction, low auditory distraction). Third, we introduced an auditory distraction condition in which participants heard auditory stimuli while looking at a blank screen (low visual distraction, high auditory distraction). To avoid confounding sensory and semantic effects, we exposed participants to written and spoken Hebrew words. Hence, the stimuli were meaningless to the participants, yet similar, in terms of sensory properties, to potential distractions encountered in real life (see Jones, 1993; Salamé & Baddeley, 1987).

To summarise, this study was designed to explore the mechanisms behind the eyeclosure effect. We examined the effects of general and modality-specific distractions on eyewitness memory of a violent event. In line with the cognitive load hypothesis, we expected that participants exposed to any type of sensory distraction (i.e., visual or auditory) during the interview would perform worse on the memory test than would participants exposed to minimal distraction (i.e., blank screen or eyes closed). In line with the modality-specific interference hypothesis, we expected that visual distraction would selectively impair memory for visual details and that auditory distraction would specifically harm recall of auditory information. Consistent with Vredeveldt et al.’s (2011) findings, we hypothesised that the modality-specific effect would be observed predominantly for fine-grain recall.

Method

Participants

Eighty students from the University of York participated in the study for course credit or a small monetary reward (19 males and 61 females; mean age = 20.82 years, SD = 3.92). All participants had normal or corrected-to-normal vision and hearing, were native English speakers, and did not understand Hebrew. Participants were randomly assigned to one of four interview conditions, with 20 participants in each condition.

Materials

Participants watched an 8-min extract from a TV drama. The video shows a man who is shot with a rifle; he is then taken into a house, where the wound is stitched up. After some talking, a physical fight breaks out between him and the man who stitched up the wound. Prior to the main experiment, eight pilot participants watched the video and attempted to answer the original set of 24 questions. On the basis of their performance, we selected ten questions addressing uniquely visual aspects of the event (i.e., what was seen) and ten questions addressing uniquely auditory aspects (i.e., what was mentioned verbally). The questions were asked in the order in which the corresponding events appeared in the video clip (see the Appendix). None of the pilot participants took part in the experiment proper.

Participants who did not close their eyes were asked to look at a 17-in. monitor placed in front of them, approximately 30 cm from their face. The screen was switched off in the blank-screen and auditory distraction conditions. In the visual distraction condition, participants looked at 12 Hebrew words (in Hebrew script) that gradually appeared and disappeared at random locations on the screen at a rate of one per second, looped throughout the interview. In the auditory distraction condition, participants listened to the same Hebrew words spoken over speakers at 55–60 dB SPL(A). Pilot work confirmed that the spoken words did not interfere with the ability to hear the interview questions.
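For concreteness, the sketch below shows one way such a visual distraction loop could be implemented. The paper does not name its presentation software, so this is a minimal sketch assuming PsychoPy; the word list, window settings, and fixed loop duration are illustrative, and the gradual fade-in/fade-out of each word is omitted for brevity.

```python
import random
from psychopy import visual, core

# Placeholder Hebrew-script words; the study's actual 12-word list is not reported.
hebrew_words = ["שלום", "ספר", "מים", "אור", "דלת", "עץ",
                "שמש", "ירח", "כוכב", "בית", "יד", "עין"]

win = visual.Window(size=(1024, 768), color="black", units="norm")
clock = core.Clock()

while clock.getTime() < 600:  # loop for an assumed 10-min interview
    word = visual.TextStim(
        win,
        text=random.choice(hebrew_words),
        pos=(random.uniform(-0.8, 0.8), random.uniform(-0.8, 0.8)),  # random location
    )
    word.draw()
    win.flip()      # each flip replaces the previous word with the next one
    core.wait(1.0)  # one word per second, as in the study

win.close()
```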

Procedure

All participants were tested individually in a small laboratory. Participants were informed about the violent nature of the video clip via the announcement calling for participants, and provided written consent. After watching the video clip, participants engaged in a word-finder filler task for approximately 5 min. They were then interviewed with the 20 questions about the video (see the Appendix). One group of participants was instructed to look at the blank screen throughout the interview (control condition), while another group was instructed to keep their eyes closed (eyes-closed condition). A third group was told that they would see Hebrew words popping up on the screen during the interview; they were instructed to ignore the words but to keep their eyes focussed on the screen (visual distraction condition). The final group was instructed to keep looking at the blank screen while they heard Hebrew words being spoken, which they were likewise instructed to ignore (auditory distraction condition). Participants who failed to comply with the instructions at any point during the interview were given an appropriate reminder. All participants were specifically instructed to ask the interviewer to repeat a question if they could not hear it properly. They were asked to remember as much as they could but not to guess: a ‘do not remember’ response was permissible. After answering the interview questions, participants completed a demographic information sheet. At the end of the experiment, participants were debriefed and thanked for their participation.

Data scoring

The first author scored the audio-taped interviews blind to interview condition. If participants indicated that they did not know the answer to a question, the response was scored as an omission. Any answers provided were scored as correct or incorrect. We employed a relatively strict scoring procedure, in which a response was scored as incorrect if it contained any incorrect elements, even if part of the answer was accurate. To provide a more sensitive scoring procedure than those used in previous research (e.g., Perfect et al., 2008), all correct responses were also scored for grain size. This enabled us to test the prediction that eyeclosure would have the largest effect on the type of information that benefits from visualisation (i.e., fine-grain visual information). A correct response was scored as fine grain if it contained all elements of a complete and accurate answer to the question, and as coarse grain if the answer was vague or contained only part of the correct answer. Examples of fine-grain, coarse-grain, and incorrect responses for each question can be found in the Appendix. Sixteen interviews (320 responses; 20% of the total sample) were randomly selected and scored independently by a second blind coder. Inter-rater reliability (for the decision to score a response as fine-grain correct, coarse-grain correct, incorrect, or omitted) was high, κ = .96, p < .001. Coding disagreements were rare and mainly involved responses that contained both accurate and inaccurate elements. The scores of the first author were retained for the main analysis.
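For readers wishing to reproduce this kind of reliability check, Cohen’s kappa can be computed directly from the two coders’ category codes. A minimal Python sketch with made-up codes (the actual response data are not reproduced here):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes for ten responses, one per question:
# F = fine-grain correct, C = coarse-grain correct, I = incorrect, O = omitted
coder_1 = ["F", "F", "C", "I", "O", "F", "C", "C", "I", "F"]
coder_2 = ["F", "F", "C", "I", "O", "F", "C", "I", "I", "F"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa = {kappa:.2f}")  # chance-corrected inter-rater agreement
```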

Results

The present study was designed to investigate whether meaningless sensory distraction interferes with memory retrieval and whether it does so in a general or modality-specific manner. The cognitive load hypothesis and the modality-specific interference hypothesis are discussed in separate sections below. Because some of the variables were not normally distributed, non-parametric tests were also performed, with results identical to those reported below. To allow for direct comparisons between variables, we report only the parametric test results here. All interactions not reported below were non-significant. Figure 1 shows all types of responses to interview questions about visual (Fig. 1a) and auditory (Fig. 1b) details, in the four different interview conditions.

Fig. 1 Fine- and coarse-grain correct, incorrect, and omitted responses to questions about (a) visual and (b) auditory details, by interview condition. Error bars indicate standard errors

Is eyeclosure special?

One of our research questions was whether eyeclosure helps memory simply by blocking out the environment, or whether there is something special about eyeclosure per se. Simple effects analyses revealed no significant differences between the control and eyes-closed conditions on any of the variables, suggesting that the effect is not unique to the physical act of closing the eyes. Therefore, the two conditions that were low in distraction were collapsed for all the planned comparisons reported below.

Cognitive load hypothesis

Correct recall

A 4 (interview condition: control, eyes closed, visual distraction, auditory distraction) × 2 (question modality: visual, auditory) mixed analysis of variance (ANOVA) was conducted on the total number of (fine- plus coarse-grain) correct responses. There was a significant main effect of interview condition, F(3, 76) = 6.64, p < .001, η² = .21 (see Fig. 1). Planned contrasts showed that participants in the low-distraction conditions gave significantly more correct responses than did participants in the high-distraction conditions, t(76) = 4.31, p < .001, η² = .20. In addition, there was a significant main effect of question modality, F(1, 76) = 7.41, p < .01, η² = .09. Overall, more correct responses were given to questions about visual aspects of the event than to questions about auditory aspects. The interaction between interview condition and question modality will be addressed in the section exploring the modality-specific interference hypothesis.
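For illustration, a mixed ANOVA of this design can be run in Python with the pingouin package; the data below are simulated purely to show the structure of the analysis, not to reproduce the study’s results.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(42)
conditions = ["control", "eyes_closed", "visual_distraction", "auditory_distraction"]

# Simulated long-format data: 20 participants per condition (between-subjects),
# two rows per participant (question modality, within-subjects); 'correct' is
# the number of fine- plus coarse-grain correct responses out of 10 questions.
rows = [
    {"participant": 20 * c + p, "condition": cond, "modality": mod,
     "correct": int(rng.integers(3, 10))}
    for c, cond in enumerate(conditions)
    for p in range(20)
    for mod in ("visual", "auditory")
]
df = pd.DataFrame(rows)

# 4 (interview condition) x 2 (question modality) mixed ANOVA
aov = pg.mixed_anova(data=df, dv="correct", between="condition",
                     within="modality", subject="participant")
print(aov.round(3))
```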

Grain size

Overall, participants gave significantly more fine-grain than coarse-grain correct responses, t(79) = 14.03, p < .001, η² = .81 (see Fig. 1). Separate 4 (interview condition: control, eyes closed, visual distraction, auditory distraction) × 2 (question modality: visual, auditory) ANOVAs were conducted for fine- and coarse-grain responses. As illustrated in Fig. 1, the effect of interview condition was observed for fine-grain correct responses, F(3, 76) = 6.83, p < .001, η² = .22, but not for coarse-grain correct responses, F(3, 76) = 1.20, p > .10, η² = .05. Planned contrasts for fine-grain recall showed that participants not exposed to sensory distraction gave significantly more correct fine-grain responses than did participants exposed to sensory distraction, t(76) = 4.31, p < .001, η² = .20. Furthermore, although both ANOVAs revealed a significant main effect of question modality, the effects ran in opposite directions for the two types of recall. There were significantly more correct coarse-grain responses to questions about visual details than to questions about auditory details, F(1, 76) = 38.55, p < .001, η² = .33, but significantly more correct fine-grain responses to questions about auditory information than to questions about visual information, F(1, 76) = 13.21, p < .001, η² = .14. Thus, witnesses tended to give generic descriptions of visual aspects of the witnessed scene, but specific descriptions of auditory aspects.

Incorrect recall and omissions

A 4 (interview condition: control, eyes closed, visual distraction, auditory distraction) × 2 (question modality: visual, auditory) ANOVA on the number of incorrect responses revealed a significant main effect of interview condition, F(3, 76) = 4.47, p < .01, η² = .15. Planned contrasts showed that participants in the low-distraction conditions gave significantly fewer incorrect responses than did participants in the high-distraction conditions, t(76) = 3.34, p < .01, η² = .13 (see Fig. 1). We found no main effect of question modality (F < 1). Another 4 × 2 ANOVA on the number of omissions showed that participants left significantly more auditory than visual questions unanswered, F(1, 76) = 30.34, p < .001, η² = .13. The number of omissions was not significantly affected by interview condition, F(3, 76) = 1.06, p > .10, η² = .04.

Modality-specific interference hypothesis

Correct recall

Because we had the a priori prediction that visual and auditory distraction would selectively impair memory for aspects presented in the same modality, we examined the interaction between type of distraction and question modality within these two conditions. We conducted separate 2 (type of distraction: visual, auditory) × 2 (question modality: visual, auditory) mixed ANOVAs on the total number of correct responses, the number of coarse-grain correct responses, and the number of fine-grain correct responses. The hypothesised interaction was not significant for the total number of correct responses, F(1, 38) = 1.89, p > .10, η² = .04, or for coarse-grain correct responses, F(1, 38) = 1.18, p > .10, η² = .02. For fine-grain recall, however, there was a significant interaction between type of distraction and question modality, F(1, 38) = 8.66, p < .01, η² = .16. Figure 1 shows that fine-grain correct recall of visual details was disrupted more by visual than by auditory distraction, whereas fine-grain correct recall of auditory details was impaired more by auditory than by visual distraction.

Incorrect recall and omissions

A 2 (type of distraction: visual, auditory) × 2 (question modality: visual, auditory) ANOVA on incorrect responses revealed a significant interaction between type of distraction and question modality, F(1, 38) = 7.40, p < .01, η² = .16. In line with the modality-specific interference hypothesis, visual distraction during the interview was associated with more false memories for visual than for auditory aspects of the event, and conversely, auditory distraction selectively increased false memories in the auditory domain. Another 2 × 2 ANOVA on the number of omissions revealed no significant interaction between type of distraction and question modality, F(1, 38) = 3.33, p = .08, η² = .05.

Testimonial accuracy

To assess the quality of witness reports in different interview conditions, we calculated testimonial accuracy by dividing the number of (fine- plus coarse-grain) correct responses by the total number of correct and incorrect responses (cf. Smeets, Candel, & Merckelbach, 2004). Testimonial accuracy rates for questions about visual and auditory aspects in each interview condition are displayed in Table 1. A 4 (interview condition: control, eyes closed, visual distraction, auditory distraction) × 2 (question modality: visual, auditory) ANOVA on testimonial accuracy showed no main effect of modality (F < 1). There was, however, a main effect of interview condition, F(3, 76) = 5.04, p < .01, η² = .17. Planned contrasts showed that testimonial accuracy was significantly higher in the low-distraction conditions than in the high-distraction conditions, t(76) = 3.61, p < .001, η² = .22.
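This accuracy measure is simply the proportion of substantive (non-omitted) answers that were correct. A small illustration with hypothetical counts:

```python
def testimonial_accuracy(fine_correct, coarse_correct, incorrect):
    """Correct responses as a proportion of all answers given;
    omitted responses are excluded from the denominator."""
    correct = fine_correct + coarse_correct
    return correct / (correct + incorrect)

# Hypothetical witness: 5 fine-grain and 2 coarse-grain correct, 1 incorrect
print(testimonial_accuracy(5, 2, 1))  # 0.875
```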

Table 1 Means (Ms) and standard deviations (SDs) for testimonial accuracy rates for questions about visual and auditory aspects of the event in different interview conditions

Furthermore, when the visual and auditory distraction conditions were analysed separately, a 2 (type of distraction: visual, auditory) × 2 (question modality: visual, auditory) ANOVA showed a significant interaction between type of distraction and question modality on testimonial accuracy rates, F(1, 38) = 6.60, p < .05, η² = .15. Table 1 shows that visual distraction selectively reduced the accuracy of visual reports, whereas auditory distraction interfered more with the accuracy of auditory than of visual reports. In sum, any type of distraction during the interview impaired testimonial accuracy relative to minimal distraction; visual distraction particularly harmed the accuracy of visual reports, whereas auditory distraction selectively impaired the quality of auditory reports.

Discussion

The present study provides evidence for both the cognitive load hypothesis and the modality-specific interference hypothesis. First, we found that any type of sensory distraction impaired fine-grain correct recall and increased false memories about the event, as compared with interview conditions involving minimal distraction. Second, we found that visual distraction impaired recall of visual details more than recall of auditory details, and that auditory distraction was particularly disruptive for recall of auditory details. Furthermore, in accordance with Vredeveldt et al.’s (2011) findings, we found that modality-specific interference affected fine-grain but not coarse-grain recall, supporting the idea that visual or auditory imagery enables witnesses to ‘see’ or ‘hear’ the precise details of the witnessed event. Unlike in Vredeveldt et al., however, the general interference effect in the present study was observed for fine-grain rather than coarse-grain recall. Thus, participants in the low-interference conditions seemed better able to concentrate on the retrieval task, replacing less helpful coarse-grain responses, and particularly unhelpful incorrect responses, with more valuable fine-grain responses. All in all, memory for the basic information of a violent event (i.e., coarse-grain recall) seems to be robust, whereas memory for specific details (i.e., fine-grain recall) is more easily disrupted by general and modality-specific interference from the environment.

The involvement of a combination of general and modality-specific processes is not unique to the eyeclosure effect. Baddeley and Hitch’s (1974) working memory model also accommodates both types of processes. In fact, it is plausible that retrieval from long-term memory requires working memory (Anderson, 1996; Moscovitch, 1994; Rosen & Engle, 1997). First, under conditions of high cognitive load, the central executive can allocate only limited resources to effortful retrieval from long-term memory (Moscovitch, 1994, 1995). Indeed, concurrent load during retrieval may reduce semantic recall performance by as much as 32% (Baddeley, Lewis, Eldridge, & Thomson, 1984; Rosen & Engle, 1997). Furthermore, numerous applied studies have shown that distractions from the environment, such as office and traffic noise, can significantly impair performance on real-world cognitive tasks relying on episodic long-term memory (Banbury & Berry, 1998, 2005; Banbury, Macken, Tremblay, & Jones, 2001; Hygge, Boman, & Enmarker, 2003; Hygge, Evans, & Bullinger, 2002). Thus, memory retrieval benefits from a reduction in cognitive load, which may be achieved by closing the eyes. This observation is in line with Perfect et al.’s (2011) finding that eyeclosure was particularly effective in reducing false memories when participants were under high cognitive load (caused by bursts of white noise).

Second, modality-specific processing has been observed in long-term memory (see also Barsalou, Simmons, Barbey, & Wilson, 2003; Barsalou & Wiemer-Hastings, 2005). Visual distractions may disrupt the workings of the visuospatial sketchpad, which is responsible for maintaining visual images retrieved from long-term memory (Baddeley, 1983). Similarly, auditory-verbal distractions may impair auditory-verbal imagery represented in the phonological loop (Baddeley & Logie, 1992). Consistent with this idea, Logie (1986) found that memory performance based on visual imagery was disrupted by looking at irrelevant visual displays. Baddeley and Andrade (2000) found that visual and auditory images retrieved from long-term memory were rated as significantly less vivid when participants were required to perform a concurrent task in the same modality as the retrieved image. Brooks (1967, 1968) found that memory for spatial relations and diagrams was selectively impaired when output had to be visually monitored, whereas retrieval of verbal information was selectively disrupted when output had to be spoken. Thus, retrieval of visual and auditory information from long-term memory seems to rely on modality-specific subsystems in working memory. When visualisation is disrupted, memory for visual information suffers, and when auralisation (cf. Kleiner, Dalenbäck, & Svensson, 1993) is disrupted, memory for auditory information suffers.

An additional variable of interest in the present study was whether looking at a blank screen would be just as effective as closing the eyes. We found no significant differences between the control and eyes-closed conditions, although eyeclosure seemed to be somewhat more effective in improving recall. As compared with the high-distraction conditions, eyeclosure increased the number of correct fine-grain responses by 32%, whereas looking at a blank screen resulted in a 21% increase. Closing the eyes caused an impressive 43% decrease in incorrect recall, whereas looking at a blank screen resulted in a marginally significant 23% decrease. Finally, eyeclosure increased testimonial accuracy rates by 12%, whereas looking at a blank screen increased accuracy by 7%. The differences may be due to the fact that closing the eyes blocks out all visual input from the environment more effectively than does looking at a blank screen (e.g., participants may have been distracted by movements in the periphery of their visual field). Nevertheless, the present data do not indicate that the eyeclosure effect is unique to the physical act of closing the eyes (cf. Wagstaff et al., 2004). From an applied perspective, this is an encouraging finding. Fisher and Geiselman (1992) observed that eyewitnesses are sometimes reluctant to close their eyes during the interview, and the present findings provide empirical support for their suggestion to “focus on a solid visual field, like a blank wall” instead (Fisher & Geiselman, 1992, p. 133). It should be noted, however, that the blank computer screen that participants looked at during the interview was also the screen on which the video had been presented earlier. Future research should investigate whether focussing on any blank space improves memory, to rule out context-specific effects (cf. Godden & Baddeley, 1975).
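For clarity, the percentages just quoted are relative changes against the pooled high-distraction baseline; the sketch below shows the arithmetic with purely illustrative means.

```python
def relative_change(condition_mean, baseline_mean):
    """Percentage change of a condition mean relative to a baseline mean."""
    return 100 * (condition_mean - baseline_mean) / baseline_mean

# Illustrative (not the study's actual) fine-grain correct-recall means
print(relative_change(6.6, 5.0))  # 32.0 -> a 32% increase over the baseline
```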

Of course, real eyewitnesses will never be forced to look at or listen to Hebrew words during a police interview. Although this type of distraction is not realistic, it enabled us to isolate the effects of purely sensory interference. The fact that these relatively simple, meaningless stimuli interfered with memory of a violent event suggests that the more complex, semantically meaningful distractions present during real eyewitness interviews may disrupt memory retrieval even more. For instance, initial eyewitness interviews are sometimes conducted at the scene of the crime, rather than in quiet interview rooms (Gabbert, Hope, & Fisher, 2009). Future studies could investigate whether the benefits of eyeclosure are as prominent in such an animated setting (e.g., on a busy street) as in a relatively quiet setting. In addition, the social demands of the interview situation may have a considerable impact on memory performance. Wagstaff et al. (2008) found that the presence of another person in the interview room significantly impaired eyewitness memory, and Markson and Paterson (2009) found that the memory benefits of averting the gaze from the experimenter were due to a reduction in social, rather than cognitive, demands (but see Doherty-Sneddon & Phelps, 2005). Future work could examine the role of social factors in the eyeclosure effect—for instance, by comparing face-to-face eyewitness interviews with interviews conducted over a live video link (cf. Davies, 1999; Doherty-Sneddon & Phelps, 2005). Finally, although the benefits of eyeclosure have now been observed across a number of violent (Vredeveldt, Baddeley, & Hitch, 2010, 2011) and non-violent events (Perfect et al., 2008; Wagstaff et al., 2004), the present study examined memory for only one event. Future work needs to establish whether the present findings replicate across different violent events and in more realistic settings. If the effect proves robust, instructing witnesses to close their eyes or look at a blank space could be a viable alternative to the complex cognitive interview procedure, especially when time is limited.