Memory tests not only provide a tool for assessing a person’s current level of knowledge, but are themselves effective learning strategies that can boost memory performance (for reviews, see Bäuml & Kliegl, 2024; Karpicke, 2017). Remarkably, even tests given before learning can enhance recall on a later test (e.g., Kornell et al., 2009; Richland et al., 2009; for a review, see Pan & Carpenter, 2023). For instance, Richland et al. (2009) had participants guess the answers to prequestions (e.g., ‘What is total color blindness caused by brain damage called?’) before reading a passage about color blindness that contained the answers. When these questions were repeated on a subsequent final test, participants who had received the prequestions outperformed participants who had studied the passage without any initial questions. This pretesting effect is particularly striking because studies examining the phenomenon usually exclude correct guesses made during the initial pretest from the data analysis, thereby isolating the effects of failed guessing attempts. From an applied perspective, the pretesting effect appears highly relevant: it has been observed for various types of study materials, such as trivia questions, word pairs, videos, and prose passages, and has been found not only in laboratory-based studies but also in educational settings (for reviews, see Chan et al., 2018; Kornell & Vaughn, 2016).
A critical issue for educational contexts is whether pretesting can be used to promote the acquisition of study material that is distributed over multiple segments, such as multiple book chapters. Indeed, as students approach the exam phase, they often have to prepare large bodies of information that are either related, because they belong to a single subject, or largely independent of one another, because tests for different subjects have to be prepared in parallel. In both situations, students who want to use pretesting as a learning tool may ask themselves prequestions about the to-be-studied material not only at the beginning of the learning period but throughout the whole process, for example, prior to reading each new book chapter. Thus far, little is known about whether such interpolated pretesting can boost long-term retention and whether its effectiveness depends on whether the segments to be studied are thematically related or distinct.
The findings from one earlier study by Pan et al. (2020) suggest that interpolated pretesting may boost later retention of related study material. These researchers conducted two experiments in which participants watched a 26-minute video of an online statistics lecture that was partitioned into four segments of similar length. Before watching each segment, participants either solved algebra problems or answered multiple-choice questions about the upcoming segment. Both experiments showed that, relative to interpolated solving of algebra problems, interpolated pretesting reduced mind wandering and led to higher recall performance on a subsequent final test covering all four segments. The results of Experiment 2 further indicated that the benefits of such interpolated pretesting were similar to those of (more typical) pretesting administered entirely prior to segment presentation. The findings thus suggest that interpolated pretesting can keep participants engaged with a learning task that consists of several related segments.
While these findings sound promising, it is important to examine their generalizability, especially since several aspects of the Pan et al. (2020) study diverge from typical pretesting-effect procedures. In particular, the researchers (i) used study material on a topic (i.e., signal detection theory) with which their participants (i.e., undergraduate psychology students) may already have been somewhat familiar and (ii) applied multiple-choice questions during the initial pretest. Both the topic and the type of initial test may have contributed to the relatively high percentage of correct answers on the pretest (i.e., 51% in Experiment 1 and 48% in Experiment 2). Unlike many other pretesting-effect studies, which removed questions that were answered correctly on the initial pretest from further analysis, the researchers also did not isolate the effects of erroneous guesses on later recall performance. While the decision not to distinguish between correctly and incorrectly answered pretest questions seems reasonable from a purely applied perspective, the next step is to examine whether interpolated pretesting can still induce a continuous pretesting effect across all study segments in a more typical version of the task that excludes all initially correct answers.
A related issue is the potential role of the success rate during initial pretesting for the effects of interpolated testing: pretest questions that yield a lower success rate than in the Pan et al. (2020) study (most studies of the pretesting effect in fact report success rates below 10%; e.g., Grimaldi & Karpicke, 2012; Kliegl et al., 2024a; Kornell et al., 2009) could reduce participants’ engagement over multiple study segments in the pretesting condition and thus reduce the size of the pretesting effect from earlier to later study segments. Indeed, it seems plausible that when participants realize upon reading the first study segment that they answered most or all pretest questions incorrectly, their effort to come up with adequate guesses on subsequent interpolated pretest cycles may diminish. The current study therefore examined whether interpolated pretesting can still boost later retention when the pretest questions are so difficult that mostly errors are produced on the initial pretest and when only questions answered incorrectly on the initial pretest are included in the further analyses.
The present study
The goal of the present study was to examine whether pretests interspersed between single study segments can boost later recall performance both when the segments consist of related prose passages and when they consist of distinct prose passages. In Experiment 1, participants were shown a text about the Big Bang theory that was divided into four study segments. Participants were either asked to study each segment for a later test (study-only condition) or, prior to studying each segment, to answer seven questions about the immediately following segment (pretest condition; e.g., ‘How many years ago did the Big Bang set the expansion of the universe in motion?’). Unlike in the Pan et al. (2020) study, no answer options were shown. Study duration in the pretest condition was one third shorter than in the study-only condition (2 min 20 s vs. 3 min 30 s) to account for the duration of the pretest, which took 1 min 10 s (for a similar procedure, see Richland et al., 2009). Twenty-four hours after the acquisition phase, participants took a final test on all four segments. This test included, in random order, all 28 initial pretest questions, i.e., seven questions from each of the four segments. The percentage of correctly answered final-test questions and the number of overt errors (i.e., intrusions) produced on the final test were analyzed.
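For illustration, the time matching and the composition of the final test can be summarized in a short sketch (Python; the durations and counts are taken from the description above, whereas the variable names are ours and purely illustrative):

    import random

    SEGMENTS = 4
    QUESTIONS_PER_SEGMENT = 7

    # Durations per segment (in seconds): the pretest condition trades
    # 70 s of study time for the 70 s pretest, so total time on task per
    # segment is matched across conditions.
    PRETEST_DURATION = 70      # 1 min 10 s
    STUDY_ONLY_STUDY = 210     # 3 min 30 s
    PRETEST_STUDY = 140        # 2 min 20 s
    assert PRETEST_DURATION + PRETEST_STUDY == STUDY_ONLY_STUDY

    # The delayed final test presents all 28 pretest questions in random order.
    final_test = [f"segment{s + 1}_q{q + 1}"
                  for s in range(SEGMENTS)
                  for q in range(QUESTIONS_PER_SEGMENT)]
    random.shuffle(final_test)
    assert len(final_test) == 28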
Experiment 1 was intended as a conceptual replication of Pan et al. (2020), examining whether their finding that interpolated pretesting can lead to an overall boost in final-test performance still arises when a more difficult free-answer format is used instead of a multiple-choice format. Procedural details of Experiment 2 were largely identical to those of Experiment 1, with the critical difference that the four study segments consisted of texts that were unrelated to each other. The goal of Experiment 2 thus was to investigate whether interpolated pretesting can still benefit learning and memory of multiple study segments when the segments cover different topics.