Misconceived knowledge (e.g., misconceptions, inaccurate beliefs, misinformation, myths) is difficult to revise (Chi, Slotta, & de Leeuw, 1994; Ecker, Lewandowsky, Cheung, & Maybery, 2015; Özdemir & Clark, 2007; Rapp & Braasch, 2014; Turcotte, 2012; Vosniadou, 1994). Not only is the revision of misconceived knowledge difficult, but it can also interfere with an individual’s ability to acquire new knowledge (Kendeou & O’Brien, 2014; Shtulman & Valcarcel, 2012). Researchers have long been interested in determining the conditions that can effectively address the challenges of revising misconceived knowledge. In this context, the use of texts has been identified as a powerful tool in the knowledge revision process (Sinatra & Broughton, 2011). One text type that has been shown to be successful at helping individuals revise misconceived knowledge is the refutation text (Alvermann & Hague, 1989; Alvermann & Hynd, 1989; Braasch, Goldman, & Wiley, 2013; Frède, 2008; Guzzetti, Snyder, Glass, & Gamas, 1993; Hynd, Alvermann, & Qian, 1997; Mason & Gava, 2007; Mason, Gava, & Boldrin, 2008; Tippett, 2010). Refutation texts explicitly acknowledge commonly held incorrect conceptions or beliefs about a topic, directly refute them, and provide a more satisfactory explanation (Hynd, 2001).

The power of refutation texts has been established in the conceptual change literature, specifically for the revision of knowledge that is misconceived at the individual belief level (Chi, 2008). Chi termed this type of revision belief revision. In recent years, a number of researchers have systematically examined the processes by which refutation texts influence belief revision, using the reading time, eyetracking, and think-aloud methodologies (Ariasi & Mason, 2011, 2014; Broughton, Sinatra, & Reynolds, 2010; Kendeou & van den Broek, 2007; McCrudden & Kendeou, 2014; Trevors & Muis, 2015; van den Broek & Kendeou, 2008). Taken together, these studies have provided a thorough description of the different processes that refutation texts evoke over the course of reading.

Drawing on this work, Kendeou and O’Brien (2014) proposed the knowledge revision components framework (KReC), which outlines basic comprehension processes and text factors that can be accentuated to increase the potential for knowledge revision during reading. KReC identifies five principles to account for the knowledge revision process; the first two are assumptions (encoding and passive activation), and the remaining three are conditions necessary for the revision process (coactivation, integration, and competing activation). According to the encoding principle, once information has been encoded into long-term memory, it cannot be “deleted” (e.g., Gillund & Shiffrin, 1984; Hintzman, 1986; Kintsch, 1988; Ratcliff, 1978; Ratcliff & McKoon, 1988). Thus, there is always the potential that it can be reactivated and influence comprehension, decision making, and learning. According to the passive-activation principle, information in long-term memory is activated via passive memory processes (Myers & O’Brien, 1998; O’Brien & Myers, 1999). Because memory activation is both passive and unrestricted, any information that is related to the current contents of working memory has the potential to become activated, independent of whether it facilitates or interferes with learning. However, it is important to note that the signal that emanates from working memory is not evenly distributed across all currently active elements. The signal varies in intensity on the basis of the reader’s focus of attention, with the signal being strongest from those elements in focus (see Myers & O’Brien, 1998; O’Brien & Myers, 1999; O’Brien & Cook, 2016). Activated elements, in turn, signal related elements in memory. Activation builds in this manner, and when the process stabilizes, the most active elements (i.e., those that resonate the most) enter into working memory. In this way, elements that are related to the reader’s focus of attention have a greater likelihood of entering working memory.
These two principles taken together raise an important question for knowledge revision: If misconceived knowledge cannot be erased and always has the potential to be reactivated and to influence subsequent comprehension and learning, how can knowledge revision be accomplished?

The remaining three principles of KReC are designed to answer this question by specifying the conditions that will eliminate the reactivation, and thus the potential influence, of previously encoded misconceived knowledge. KReC operationalizes knowledge revision as the elimination of the activation of previously acquired misconceived knowledge. According to the coactivation principle, coactivation is a necessary condition for knowledge revision because it is the only way that new information can come in contact with previously encoded misconceived knowledge (Kendeou, Muis, & Fulton, 2011; Kendeou & van den Broek, 2007; van den Broek & Kendeou, 2008). According to the integration principle, knowledge revision can occur only when the newly encoded information is integrated with this previously encoded misconceived knowledge. Any time new information is integrated with previously acquired information, the long-term memory representation of that information is revised to take the new information into account (e.g., Kendeou, Smith, & O’Brien, 2013; Kendeou, Walsh, Smith, & O’Brien, 2014; O’Brien, Cook, & Guéraud, 2010; O’Brien, Cook, & Peracchi, 2004; O’Brien, Rizzella, Albrecht, & Halleran, 1998). According to the competing-activation principle, as the amount of newly encoded information increases, it begins to dominate the integrated network of information, drawing increasing amounts of activation to itself and, at the same time, away from the previously acquired misconceived information. As activation is drawn away from the misconceived information, the amount of interference from that information decreases accordingly.

In a series of studies, Kendeou and colleagues (Kendeou et al., 2013; Kendeou et al., 2014) provided evidence for the knowledge revision principles outlined in KReC by examining the influence of specific text characteristics. In particular, Kendeou et al. (2013) demonstrated that a systematic increase of the interconnectedness of the newly encoded information resulted in a systematic decrease in the activation of the misconceived information. Furthermore, Kendeou et al. (2014) demonstrated that newly encoded information was most effective at eliminating the activation and interference from misconceived information when it provided causal explanations. Causal information inherently provides a rich set of interconnections (O’Brien & Myers, 1987; Trabasso & Suh, 1993; Trabasso & van den Broek, 1985), and therefore provides an efficient and effective means of creating a network that will compete for activation and draw sufficient activation so that any interference from the misconceived information is effectively reduced and/or eliminated.

The goal of the present set of experiments was to extend the findings of Kendeou and colleagues (Kendeou et al., 2013, 2014) by identifying additional factors that influence knowledge revision. One such factor is the credibility of the source within the text that provides the newly encoded information. Within the text and discourse literature, source credibility, or how believable a source of information is perceived to be (Alison et al., 2012), is usually evaluated in terms of a source’s trustworthiness and expertise (Lombardi, Seyranian, & Sinatra, 2014; Pornpitakpan, 2004; Sparks & Rapp, 2011). Trustworthiness is a measure of the source’s honesty, and expertise is a measure of the source’s knowledge and capacity to provide accurate information. Source credibility has the potential to influence knowledge revision during reading because perceived credibility influences whether and what information individuals use when they are confronted with discrepant information (Braasch, Rouet, Vibert, & Britt, 2012). Specifically, Braasch et al. (2012) demonstrated that when readers are confronted with discrepancies or breaks in coherence, they attend more closely to, and evaluate, source information during encoding.

Furthermore, the focus on source credibility is also motivated by its recent documented influences on comprehension when reading a single text (Appel & Mara, 2013; Braasch et al., 2012; Guillory & Geraci, 2013; Sparks & Rapp, 2011; Thomm & Bromme, 2011) and multiple texts (Anmarkrud, Bråten, & Strømsø, 2014; Bråten, Strømsø, & Salmerón, 2011; Bromme, Scharrer, Stadtler, Hömberg, & Torspecken, 2015; Kobayashi, 2014; Stadtler, Scharrer, Brummernhenrich, & Bromme, 2013; Strømsø, Bråten, & Britt, 2009, 2010; Strømsø, Bråten, Britt, & Ferguson, 2013). Credibility in this context has been examined at two different “grain” levels: at the level of the text or document as a whole, and at the level of the character or narrator within a text or document.

The evaluation of source credibility at the grain level of the text or document as a whole, also termed sourcing, has a central role in disciplinary literacy practices (Hynd-Shanahan, 2013; Moje, 2007; Shanahan & Shanahan, 2012). For example, historians must consider the implications of an author’s credibility when interpreting a historical text (Wineburg, 1991), whereas mathematicians focus more on the content of text than on its author (Shanahan & Shanahan, 2008; Shanahan, Shanahan, & Misischia, 2011). Although the emphasis on author or document credibility may vary across disciplines, being able to distinguish between credible and noncredible sources has important implications for text comprehension, especially when readers choose texts to read in order to learn about a new subject. For example, Strømsø, Bråten, and Britt (2010) observed that individuals who attended to source information when presented with texts on climate change had better text comprehension than those who did not attend to source information. Most importantly, several studies have reported that attending to source information is not a routine process for readers, particularly for novices in a domain (Britt & Aglinskas, 2002; Goldman, Braasch, Wiley, Graesser, & Brodowinska, 2012; Kim & Millis, 2006; Kobayashi, 2014).

Particularly relevant to the present work is the evaluation of source credibility at the grain level of the character or the narrator within a text. Work in this area has documented the differential impacts of source credibility online (i.e., during reading) and offline (i.e., after reading). With respect to online effects, Sparks and Rapp (2011) provided evidence that readers rely on the credibility of the source providing information in a text only when they are explicitly instructed to both attend to and use that information. Specifically, in four separate experiments, Sparks and Rapp incrementally increased the prompts given to participants to attend to and take into account the credibility of an informant. They found that the effect of the informant’s credibility became apparent only when participants were explicitly instructed to attend to the credibility of the informant, and that it was strongest when participants were required to use that information to make an assessment of the information presented in the text. With respect to offline effects, Appel and Mara (2013) provided evidence that a character’s credibility in a narrative has a direct impact on readers’ intended behaviors. Appel and Mara had participants read a narrative designed to educate them about fuel-efficient driving. Source credibility was manipulated by varying trustworthiness, so that in the trustworthy condition participants were informed that the source was a well-respected expert, whereas in the untrustworthy condition, participants were informed that the source did not follow the fuel-efficient driving practices his organization advocated. A third condition was included as a control. The findings showed that the story character’s credibility influenced participants’ intentions to engage in story-consistent behaviors.
Specifically, they showed that participants in the trustworthy condition were more likely to report that they would engage in fuel-efficient driving in the future than were the participants in the untrustworthy and control conditions.

The findings from these two studies suggest that source credibility, at the level of a character or narrator within a text, has the potential to influence both the processes and the outcomes of reading comprehension. The important question in the context of the present study is the extent to which the credibility of a source in a refutation text also influences knowledge revision processes and outcomes. Drawing on the findings of Sparks and Rapp (2011), we hypothesized that when the credibility of a source providing the newly encoded information is not made salient, the knowledge revision process, as conceptualized in the context of KReC (Kendeou & O’Brien, 2014), would unfold independent of the source (Exp. 2). That is, reading refutation texts that included explanations supporting the newly encoded information that were provided by either high- or low-credibility sources would result in the same construction of highly interconnected networks depicting those explanations. When competing for activation with the misconceived knowledge, these highly interconnected networks would draw sufficient activation so that any interference from the misconceived information would be reduced and/or eliminated. This should be evident both online and when subsequently tested offline.

However, when the credibility of a source providing the newly encoded information was made salient, the knowledge revision process would be differentially influenced by the source (Exp. 3). Specifically, when the reader’s attention was purposefully directed to the source that provided the explanation and the source was of high credibility, we hypothesized that reading would still result in the construction of a highly interconnected network depicting the explanation, which would compete with the misconceived knowledge and reduce and/or eliminate its activation. This is because the evaluation or validation of source information during encoding would be readily completed (Cook & O’Brien, 2014; Lombardi, Danielson, & Young, 2016; Richter, 2015). Conversely, when the source providing the explanation was of low credibility, we hypothesized that the construction of the highly interconnected network (which would compete for activation with the misconceived information) would be disrupted. This disruption might be due to decreased allocation of attention to the explanation, with a corresponding increase in allocation of attention to the source information and/or to additional validation processes (Braasch et al., 2012). The result would be weaker and likely incomplete encoding of the explanation that was provided by a low-credibility source. In turn, the weaker and/or less-interconnected network would be less effective when competing for activation against the misconceived knowledge, and as a result would not reduce or eliminate its activation. This should also be evident both online and when subsequently tested offline.

As we indicated earlier, directing the reader’s attention to evaluate and use source information during encoding would influence what information became available through the bottom-up or memory-based processes at play, as conceptualized in the context of the KReC framework. In alignment with the passive-activation principle (Myers & O’Brien, 1998; O’Brien & Myers, 1999) of the framework, the strength of the signal emanating from active memory is always influenced more strongly by information that is within the reader’s focus (i.e., attention). At the same time, specific task instructions that made the source more salient or brought it into the reader’s focus would also influence what information was activated and became available to the reader (see also the work on “task-oriented reading”: Vidal-Abarca, Mañá, & Gil, 2010; Vidal-Abarca, Salmerón, & Mañá, 2011). In this context, task-relevant information has a higher likelihood of being activated than task-irrelevant information (McCrudden, Magliano, & Schraw, 2010). Thus, we propose that in the KReC framework attention allocation is the gateway by which top-down or strategic processes can, and do, exert an influence on bottom-up or memory-based processes (see also Kendeou & O’Brien, in press).

To summarize, in the present study a series of three experiments was designed to clarify the influence of in-text source credibility on knowledge revision processes and outcomes during reading. In Experiment 1, we established the utility of the present set of refutation texts in influencing the knowledge revision process (online) and its outcome (offline) when compared to control texts. In Experiment 2, we examined the influence of source credibility under normal reading conditions by varying the credibility of the source that provides the newly encoded information (high vs. low credibility). In Experiment 3, we examined the influence of direct instructions that made the credibility of the source providing the newly encoded information more salient.

Experiment 1

The goal of Experiment 1 was to establish the utility of a set of refutation texts in influencing knowledge revision, when compared to nonrefutation control texts. The design and procedure employed in this experiment were similar to those used in previous studies (Kendeou et al., 2014). Participants were asked to read a series of short stories that followed a narrative-informational form (Duke, 2000). All of the texts followed a standard structure. The refutation texts included an introduction, an elaboration section that refuted and explained the misconception, a filler section that continued the storyline, a correct-outcome sentence that stated the correct belief, a spillover sentence, and a closing section. The refutation and the explanation were always provided by a highly credible source (e.g., a zoology professor refuting the misconception that ostriches bury their heads in the sand). The nonrefutation control texts used the same structure; however, the elaboration section neither stated nor refuted the misconception, but provided information that only functioned to continue the story. Consistent with Kendeou et al. (2014), if the refutation texts supported knowledge revision by reducing the activation of the misconception during reading, then reading times of the correct-outcome sentence following the refutation-plus-explanation elaboration should be faster than reading times of the correct-outcome sentence following the nonrefutation control elaboration. Furthermore, to the extent that the refutation texts were sufficient to support knowledge revision, then scores on a test that assessed revision of the misconceptions after reading should be higher in the refutation than in the control condition.

Method

Participants

Forty-four undergraduate students from Minnesota State University–Mankato participated in the present study. The participants received partial course credit for their involvement in this study.

Materials

Text development

To identify misconceptions prevalent in a university student population, 27 graduate and undergraduate participants from the University of Minnesota responded to a 69-item true/false questionnaire. Thirty-four of the items dealt with common misconceptions spanning several domains. The remaining items were difficult general knowledge questions.

All of the general knowledge questions were answered correctly at rates greater than chance. Of the commonly held misconceptions, 19 were held by at least 60% of the participants. Two of these misconceptions (“Psychiatric labels cause harm by stigmatizing people” and “Adult children of alcoholics display a distinctive profile of symptoms”) were deemed potentially controversial and were not included in this study. These items were replaced with two items that had been identified as common misconceptions in previous experiments (Kendeou et al., 2014; identified in bold in Table 1). Following the same logic, a final item was also added from Kendeou et al.’s (2014) earlier work, in order to have an even number of texts.

Table 1 Misconception identification

A second norming procedure was used to identify sources of information that could readily be identified as being of either high or low credibility. Twenty-five undergraduate participants from the University of Minnesota responded to a 33-item questionnaire that included a series of information sources (i.e., a professor in her/his area of expertise, a physician, a teacher, a documentary, a scientific article, a stranger, a child, a magazine article, a TV sitcom, a celebrity, and a blog) and were asked to rate each source on a credibility scale ranging from 1 (not credible) to 10 (very credible). The overall mean rating per source was used to identify high-/low-credibility pairs to be used in the credibility manipulation in Experiments 2 and 3. Only the highest- and lowest-ranked information sources were selected to form the matched credibility pairs. The pairs were as follows: professor versus celebrity, professor versus stranger, physician versus stranger, teacher versus child, physician versus child, textbook versus blog, peer-reviewed journal article versus magazine, and documentary versus TV sitcom.

To ensure that the identified paired sources would be perceived as intended, we conducted planned mean comparisons on the participants’ credibility ratings. Participants rated a professor in her/his area of expertise as more credible than a celebrity, t(24) = 16.33, p < .001, d = 3.50; a physician was also rated as more credible than a stranger, t(24) = 12.68, p < .001, d = 2.37; and a teacher was rated as more credible than a child, t(24) = 8.82, p < .001, d = 1.40. Participants also rated textbooks as more credible than blogs, t(24) = 14.12, p < .001, d = 2.14; peer-reviewed journal articles as more credible than magazine articles, t(24) = 11.06, p < .001, d = 2.32; and documentaries as more credible than sitcoms, t(24) = 9.24, p < .001, d = 2.65.
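The planned comparisons above are standard paired t tests on within-participant ratings, with Cohen’s d computed on the difference scores. As an illustration only (the function name and the data are hypothetical, not part of the study materials), the computation for one high-/low-credibility pair can be sketched as:

```python
import math
from statistics import mean, stdev

def paired_t_and_d(high, low):
    """Paired t statistic and Cohen's d for matched credibility ratings.

    `high` and `low` each hold one rating per participant (same order),
    so the analysis is run on the within-participant difference scores.
    """
    diffs = [h - l for h, l in zip(high, low)]
    n = len(diffs)
    m = mean(diffs)
    s = stdev(diffs)                # sample SD of the differences
    t = m / (s / math.sqrt(n))      # paired t with n - 1 degrees of freedom
    d = m / s                       # Cohen's d for a within-subjects design
    return t, d
```

For the professor-versus-celebrity pair, for example, each participant contributes one rating per source, and d is computed on the difference scores, which is consistent with the large effect sizes reported above.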

Texts

The texts consisted of 20 narrative-informational texts, each addressing one incorrect belief. Each text included an introduction, an elaboration section (within-subjects manipulation: refutation-plus-explanation or a nonrefutation control), a filler section, a correct-outcome sentence, a spillover sentence, and a closing section (see Appendix A for an example text).

All passages began with an introduction (seven sentences; 100 words), which served to establish the storyline. This was followed by one of two elaboration conditions: refutation-plus-explanation or nonrefutation control. The refutation-plus-explanation elaboration consisted of a refutation (two sentences; 33 words) that explicitly stated and refuted the target incorrect belief, followed by an explanation (six sentences; 100 words) that supported the correct belief. The refutation and explanation were always provided by one of the sources identified in the norming procedure as highly credible (e.g., a professor). The nonrefutation control elaboration (eight sentences; 133 words) introduced the same highly credible source, but only progressed the story, making no mention of either the incorrect or the correct belief. In both conditions, the elaboration was followed by a filler section (four sentences; 60 words) that carried the story forward without mention of the incorrect belief or refutation. After the filler section, a correct-outcome sentence was presented that stated the correct belief. Reading times were recorded on this sentence. A spillover sentence was presented immediately following the correct-outcome sentence. The correct-outcome sentences and spillover sentences were 40–43 characters long (correct-outcome sentence, M = 41.8; spillover sentence, M = 42.1). Finally, all passages concluded with a closing section (two sentences; 30 words) that wrapped up the storyline. In addition to the 20 experimental passages, ten filler narrative passages of similar length were included among the materials. Each passage ended with a comprehension question to ensure that participants were carefully reading each passage. The comprehension questions required equal numbers of “yes” and “no” answers and did not address information concerning the incorrect belief.

Two material sets were constructed. Each set contained ten refutation texts, ten nonrefutation control texts, and ten filler texts. Across the two sets, each experimental passage occurred once in each of the two conditions.

Posttest

The test included 20 items, targeting each of the identified incorrect beliefs, and followed a two-tier approach. The first tier was a typical true/false question, whereas the second tier required participants to provide an explanation for their response. Participants were awarded one point for a correct true/false answer, and zero points for an incorrect answer. Participants were awarded two points for a correct explanation, and zero points for an incorrect explanation; no partial credit was awarded. It was possible for participants to answer the true/false question incorrectly, yet still provide a correct explanation, and therefore participants could score two points on a given item. Thus, the possible range of scores was 0–3 on each question. Following previous work (Kendeou, Braasch, & Bråten, 2016), the explanation items were awarded more points than the true/false items. This scoring accounts for the importance of being able to produce an accurate explanation as evidence for learning (Bråten & Strømsø, 2010; Cerdán & Vidal-Abarca, 2008; Chinn, Buckland, & Samarapungavan, 2011; Lombrozo, 2011). The maximum possible score on the posttest was 30 in each of the two conditions. The reliability of the scores on the test was relatively low, α = .61, but acceptable (George & Mallery, 2003).
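The two-tier scoring rule can be summarized in a short sketch (the helper functions are hypothetical and only restate the rubric described above): one point for the true/false tier, two points for the explanation tier, and no partial credit.

```python
def score_item(tf_correct, explanation_correct):
    """Score one two-tier posttest item (range 0-3):
    1 point for a correct true/false answer, 2 points for a
    correct explanation, no partial credit on either tier."""
    return (1 if tf_correct else 0) + (2 if explanation_correct else 0)

def score_posttest(responses):
    """Sum the item scores; `responses` is a list of
    (tf_correct, explanation_correct) tuples, one per item."""
    return sum(score_item(tf, ex) for tf, ex in responses)
```

With ten items per condition, the maximum per-condition score is 3 × 10 = 30, matching the maximum reported above.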

Procedure

Participants were welcomed to the lab and informed that they would be reading some texts on a computer as part of a study on reading comprehension. They were instructed to rest their preferred hand on the line-advance key (space bar). Each trial began with the word “READY” in the center of the screen. When participants were ready to read a passage, they pressed the line-advance key. Each press of the key erased the current line of text (approximately seven words) and presented the next line of text. The reading time was measured as the time between keypresses. Participants were instructed to read at a normal and comfortable reading rate. Following the last line of each passage, the cue QUESTIONS appeared in the center of the screen for 2,000 ms. This was followed by a comprehension question to which participants responded by pressing either the “yes” or the “no” key. On the trials in which participants made an error, the word ERROR appeared in the middle of the screen for 750 ms. Before beginning the experimental passages, participants read two practice passages to ensure that they were familiarized with and understood the procedure.

Upon completion of the reading task, participants completed the 20-item posttest. Participants were asked to read each question and answer it to the best of their knowledge and understanding. Following that, participants completed a short demographic form and were debriefed and thanked for their participation in the study.

Results and discussion

In this experiment, and in all subsequent experiments, reading times that were greater than 2.5 SDs from the mean were discarded. Across all experiments, this resulted in the loss of less than 3% of the data. Also, in all experiments reported, F1 refers to tests against an error term based on participant variability, and F2 refers to tests against an error term based on item variability. All analyses reported are significant at the .05 alpha level unless otherwise indicated.
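The 2.5-SD trimming criterion can be sketched as follows (a minimal illustration with a hypothetical function name; the sketch assumes trimming relative to the mean of the times passed in, and the original analyses may have applied it per participant or per condition):

```python
from statistics import mean, stdev

def trim_reading_times(times, criterion=2.5):
    """Discard reading times more than `criterion` sample SDs
    from the mean of the supplied times."""
    m, s = mean(times), stdev(times)
    return [t for t in times if abs(t - m) <= criterion * s]
```

Applying the same rule across all experiments keeps the reported data loss comparable (here, less than 3% of observations).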

To the extent that reading refutation texts influenced knowledge revision, reading times for the correct-outcome sentence in the refutation text condition should be faster than for those in the nonrefutation control text condition. Faster reading times would indicate that the correct-outcome sentence was easily integrated into the mental representation of the text. The results of a repeated measures analysis of variance (ANOVA) showed that the correct-outcome sentences were read significantly faster in the refutation (M = 1,874, SD = 354) than in the nonrefutation (M = 2,256, SD = 461) condition, F1(1, 42) = 81.84, p < .001, ηp² = .66; F2(1, 18) = 33.44, p < .001, ηp² = .65.

A repeated measures ANOVA was also conducted on the posttest scores. The analysis showed that performance on the items targeted in the refutation condition (M = 19.00, SD = 4.52) was significantly higher than performance on the items targeted in the nonrefutation control condition (M = 10.02, SD = 3.40), F1(1, 42) = 212.64, p < .001, ηp² = .84; F2(1, 18) = 77.96, p < .001, ηp² = .81. The posttest scores were further broken down into the true/false and explanation components. The analysis of these separate scores was conducted to identify whether the differences in performance could be attributed solely to differences in the explanation component, since the explanation included additional information in the refutation but not in the control condition. The analysis ruled out this possibility. Specifically, we observed significant differences between the refutation (true/false: M = 8.55, SD = 0.19; explanation: M = 10.32, SD = 0.56) and nonrefutation (true/false: M = 6.93, SD = 0.27; explanation: M = 3.00, SD = 0.33) conditions for both the true/false scores, F1(1, 42) = 69.42, p < .001, ηp² = .62; F2(1, 18) = 10.71, p < .01, ηp² = .37, and the explanation scores, F1(1, 42) = 164.11, p < .001, ηp² = .80; F2(1, 18) = 45.45, p < .001, ηp² = .72. These results demonstrate that differences in the posttest scores were not driven exclusively by higher scores on the explanation portion of the posttest. The correct outcome was presented in both the refutation and nonrefutation texts, yet when participants read the refutation texts, they used this correct information to respond correctly to the true/false items at higher rates than they did when reading the nonrefutation texts.

The reading time and posttest findings of Experiment 1 taken together are consistent with those obtained in previous research following the same paradigm (Kendeou et al., 2014), and they establish the utility of this set of refutation texts in facilitating knowledge revision during reading.

Experiment 2

The goal of Experiment 2 was to explore the extent to which source credibility influences the knowledge revision process when reading refutation texts. To address this question, participants were asked to read refutation texts in which the credibility of the source of the refutation and explanation within the text varied. The materials were the same as those used in Experiment 1, with one modification: Only the texts from the refutation condition were used, half including high-credibility sources and half including low-credibility sources. Following the revision mechanism outlined in KReC (Kendeou & O’Brien, 2014), we hypothesized that the reading of a refutation text in either condition (high vs. low credibility) would result in the construction of a highly interconnected network supporting the correct belief. Whether the newly encoded information was provided by a high- or a low-credibility source should not influence the construction of the interconnected network, because under normal reading conditions readers do not attend to, or use, credibility information during reading (Sparks & Rapp, 2011). Therefore, the supporting network of information would compete for and draw sufficient activation so that any interference from the misconceived knowledge would be reduced and/or eliminated, independent of the source. This should be evident both online and offline. Specifically, the reading times of the correct-outcome sentence in the high-credibility condition should be no different than the reading times of the correct-outcome sentence in the low-credibility condition. Furthermore, the posttest scores in the high-credibility condition should be no different than those in the low-credibility condition.

Method

Participants

Participants were 36 University of Minnesota undergraduates. Their average age was 19.5 years (SD = 1.21), and 44% were female. Participants were awarded partial course credit for their participation in the study.

Materials

Texts

The same materials were used as in Experiment 1, with the following modifications. Only the refutation texts were used. The credibility of the source of the refutation and explanation was manipulated to create a two-level, within-subjects factor: high or low credibility. In the high-credibility condition, the refutation-plus-explanation was provided by a high-credibility source, whereas in the low-credibility condition, the same information was provided by a low-credibility source (e.g., professor vs. celebrity). The credibility pairings were established on the basis of the results of the norming procedure reported in Experiment 1.

Two material sets were constructed. Each set contained ten texts in each of the two experimental conditions and ten filler texts. Across the two sets, each experimental passage occurred once in each of the two conditions. Participants were assigned to either Set 1 or Set 2 and received all 20 experimental texts, half in the high-credibility condition, and half in the low-credibility condition.

Posttest

The same posttest was used as in Experiment 1. The reliability of the scores on the test was high, α = .89.

Procedure

The procedure was the same as in Experiment 1.

Results and discussion

To the extent that under normal reading conditions source credibility does not influence knowledge revision, the reading times of the correct-outcome sentence in the high-credibility condition should be as fast as those in the low-credibility condition, suggesting that the correct-outcome sentence was easily integrated into the mental representation of the text independent of source credibility. The results of a repeated measures ANOVA showed no significant difference between the high-credibility (M = 1,661, SD = 398) and low-credibility (M = 1,663, SD = 366) conditions, F1(1, 34) = 0.05, p = .83; F2(1, 18) = 0.65 (see Table 2). It is important to note that the mean reading times on the correct-outcome sentences obtained in this experiment were comparable to those obtained in the previous experiments that had established the effectiveness of refutation texts using this paradigm (Kendeou et al., 2013; Kendeou et al., 2014).

Table 2 Mean reading times (in milliseconds) for correct outcome sentences in Experiments 2 and 3

A repeated measures ANOVA was also conducted on the posttest scores. Consistent with the reading time analysis, we found no significant difference between the high-credibility (M = 19.42, SD = 5.79) and low-credibility (M = 19.64, SD = 6.50) conditions, F1(1, 34) = 0.17, p = .69; F2(1, 18) = 0.17, p = .69 (see Table 3). Considering that the maximum possible score in a condition was 30, the participants in both conditions achieved relatively high learning scores, providing further evidence that the participants in both conditions engaged in knowledge revision. The posttest scores were further broken down into the true/false and explanation components. This analysis was conducted to identify whether any differences in performance could be attributed solely to differences in the explanation component. The analysis showed no significant differences between the high-credibility (true/false: M = 8.39, SD = 0.25; explanation: M = 10.93, SD = 0.73) and low-credibility (true/false: M = 7.94, SD = 0.32; explanation: M = 11.64, SD = 0.83) conditions for the true/false scores, F1(1, 34) = 2.58, p = .12; F2(1, 18) = 0.33, p = .57, or the explanation scores, F1(1, 34) = 1.89, p = .18; F2(1, 18) = 0.71, p = .41 (see Table 3).

Table 3 Mean posttest scores in Experiments 2 and 3

The reading time and posttest findings, taken together, suggest that under normal reading conditions—that is, when the source is not made salient—source credibility does not influence knowledge revision during the reading of refutation texts. This set of findings is consistent with previous research that had examined source credibility in the context of narrative texts, and specifically the extent to which a story character’s credibility influences comprehension (Sparks & Rapp, 2011). Recall that Sparks and Rapp showed that source credibility influenced comprehension only when participants were directed toward credibility-related information and instructed to use this information. Therefore, a logical next step was to manipulate the task demands, to examine the extent to which specific instructions to consider the credibility of the source of the refutation would influence the knowledge revision process.

Experiment 3

The purpose of Experiment 3 was to determine the effects of source credibility on knowledge revision when source credibility was made salient. Specifically, when the reader’s attention was directed to the source that provided the explanation and the source was of high credibility, we hypothesized that reading would still result in the construction of a highly interconnected network depicting this explanation, which would compete with the misconceived knowledge and reduce and/or eliminate its activation. However, when the source providing the explanation was of low credibility, we hypothesized that the construction of the highly interconnected network (which would compete for activation with the misconceived information) would be disrupted. This disruption should occur because of increased attention allocation to the source of the explanation, which, in turn, should result in weaker and/or incomplete encoding of the explanation provided by the low-credibility source. The result would be the construction of a network that was less competitive with the misconceived knowledge and therefore unable to reduce and/or eliminate activation of this knowledge. This should be evident both online and when subsequently tested offline. Specifically, online reading times of the correct-outcome sentence in the low-credibility condition should be slower than those in the high-credibility condition, and offline posttest scores in the low-credibility condition should be lower than those in the high-credibility condition.

The materials and procedure were the same as in Experiment 2, but with one modification: Prior to reading the texts, participants were told that the credibility of the sources providing the information in the text would vary and that they should use that information when reading.

Method

Participants

The participants were 36 University of Minnesota undergraduate students. Their average age was 19.9 years (SD = 3.16), and 58% were female. Participants received partial course credit for their involvement in this study.

Materials

The same materials were used as in Experiment 2. As in that experiment, the credibility of the source of the refutation and explanation was manipulated to create a two-level, within-subjects factor: high or low credibility. The reliability of the posttest scores was high, α = .90.

Procedure

The procedure was the same as in Experiment 2, with the following modification: when participants were instructed on how to complete the task, the instructions included the following prompt:

Also, when reading through the passages, you will notice that some characters are much more credible sources of information than other sources. For example, a scientist would be a much more credible source regarding information about chemistry than a clown would be. That is, you would tend to believe information conveyed to you by a scientist and you would tend to discount or ignore information conveyed to you by a clown. While reading the passages, please pay particular attention to the credibility of the characters and the sources of information and make use of that information while reading the passages.

Results and discussion

The results of a repeated measures ANOVA showed that participants’ reading times for the correct-outcome sentences were significantly slower in the low-credibility condition (M = 1,910, SD = 507) than in the high-credibility condition (M = 1,769, SD = 409), F1(1, 34) = 6.29, p < .05, ηp² = .16; F2(1, 18) = 6.24, p < .05, ηp² = .26 (see Table 2). A repeated measures ANOVA was also conducted on the posttest scores. Participants scored significantly lower in the low-credibility condition (M = 17.86, SD = 6.73) than in the high-credibility condition (M = 19.97, SD = 6.89), F1(1, 33) = 10.11, p < .01, ηp² = .24 (see Footnote 2); F2(1, 18) = 9.42, p < .01, ηp² = .34 (see Table 3). The posttest scores were further broken down into the true/false and explanation components. There was a marginal effect for the true/false items, with participants scoring lower in the low-credibility condition (M = 7.98, SD = 0.28) than in the high-credibility condition (M = 8.40, SD = 0.29), F1(1, 33) = 3.39, p = .08, ηp² = .09; F2(1, 18) = 3.52, p = .08, ηp² = .16. Participants scored significantly lower on the explanations in the low-credibility condition (M = 9.62, SD = 0.91) than in the high-credibility condition (M = 11.42, SD = 1.01), F1(1, 33) = 8.15, p < .05, ηp² = .20; F2(1, 18) = 8.92, p < .01, ηp² = .33 (see Table 3). Thus, the lower posttest performance in the low-credibility condition was driven primarily by performance on the explanation items.

The reading time and posttest findings taken together suggest that when a source’s credibility in a refutation text is made salient by explicit task instructions to take into account and use that credibility information, knowledge revision is disrupted for low-credibility sources. We discuss this finding in more detail next.

General discussion

In the present set of experiments, we examined the extent to which the credibility of a source providing the revised information in a refutation text influenced knowledge revision during reading. In doing so, first we established the effectiveness of a set of refutation texts in which the source providing the newly encoded information was highly credible. Indeed, the findings from Experiment 1 showed that the reading times of the correct-outcome sentence were significantly faster in the refutation condition than in the nonrefutation control condition. Furthermore, posttest scores were significantly higher in the refutation condition than in the nonrefutation control condition. These findings, taken together, suggested that the high-credibility refutation texts facilitated knowledge revision during reading. Next, in Experiment 2 we manipulated source credibility, to compare knowledge revision processes during the reading of high- and low-credibility refutation texts under normal reading conditions. We hypothesized that when the credibility of a source was not made salient, the knowledge revision process would unfold independent of the source. Experiment 2 provided evidence to support this hypothesis. Specifically, reading times of the correct-outcome sentences and posttest scores were not significantly different across the high- and low-credibility refutation text conditions. We also hypothesized that when the credibility of a source was made salient (via explicit instructions to attend to and use that information), the knowledge revision process would be differentially influenced by the source. Experiment 3 provided evidence to support this hypothesis. Specifically, reading times of the correct-outcome sentence were significantly slower in the low- than in the high-credibility condition, suggesting a “disruption” or negative influence of low credibility on the knowledge revision process. 
Furthermore, posttest scores were significantly lower in the low- than in the high-credibility condition, also suggesting a negative influence of low credibility on the knowledge revision outcome.

The present findings are novel in providing empirical support for the processing and representational influences of source credibility on knowledge revision. Knowledge revision in this study was conceptualized and tested in line with the KReC framework (Kendeou & O’Brien, 2014). According to KReC, for knowledge revision to be evident, newly encoded information must “win out” in reactivation over the previously acquired, misconceived information; this occurs when the new information draws increasing amounts of activation to itself, thus reducing interference from the old, misconceived information. An important factor in “winning” this competition is the interconnectedness of the causal explanation (Kendeou et al., 2014). Consistent with this account, the findings of the present set of experiments suggest that under normal reading conditions, during which the in-text source credibility was not made salient, the revision processes unfolded uninterrupted. In both the high- and low-credibility refutation conditions, readers constructed a causal network that successfully competed with the misconceived knowledge. However, when the credibility of the source was made salient, the causal network constructed in the low-credibility condition did not successfully compete with the misconceived knowledge. This was evident both online and offline. Online, interference from the misconceived knowledge was greater in the low-credibility condition, as evidenced by slower reading times on the outcome sentence relative to the high-credibility condition. Converging evidence for this interpretation comes from the offline findings: the accuracy of the explanations included in the posttests was lower in the low-credibility than in the high-credibility condition. Indeed, the lower overall posttest scores in the low-credibility condition were driven primarily by lower scores on the explanation items.

An important question raised by these findings pertains to the exact mechanisms by which specific task instructions influenced bottom-up processes such as those hypothesized within KReC. As we noted earlier, one account for this effect is attention allocation. The specific task instructions to attend to and use source credibility likely resulted in increased attention allocation to the source information because this information was task relevant (McCrudden et al., 2010). This increased attention to the source, by definition, would result in reduced attention to other text information—in this case, the explanation. In turn, reduced attention to the explanation during encoding would result in the construction of a weaker and/or incomplete network because some information might not be encoded or would be encoded as invalid because the source was not credible. Even though this account is plausible, it necessitates differential attention allocation to the source as a function of credibility (with low-credibility sources attracting more attention than high-credibility sources) to adequately explain the findings. This differential attention to source credibility could be explained by validation processes (Cook & O’Brien, 2014; O’Brien & Cook, 2016; Richter, 2015). For example, low-credibility sources might evoke additional validation processes to evaluate the information provided by that source against general world knowledge, whereas in the high-credibility condition, such validation processes would be more readily completed. Future empirical work in this vein should directly test these accounts. For example, future research could examine attention allocation when the credibility of the source is made salient, by using eyetracking and/or reading time measures for the explanation sections of the refutation texts. 
These approaches would demonstrate whether readers engage in additional processes when reading explanations presented by low-credibility sources (i.e., reading times should be longer for low- than for high-credibility sources), or whether readers tend to “skip over” explanations provided by the low-credibility sources (i.e., reading times should be faster for low- than for high-credibility sources).

Another question raised by the findings of this study pertains to the role of source credibility in the “maintenance” of the knowledge revision effect. Previous work in knowledge revision has demonstrated that an effect similar to the one obtained in the present study is rather long-lasting, in that it can be maintained even after a one-month delay (Kendeou et al., 2014). In this context, the extent to which the effect is short- or long-lasting depends primarily on the nature of the misconceived knowledge that has been targeted. In the present study, the focus was exclusively on the revision of knowledge that was misconceived at the individual belief level (Chi, 2008). However, revision at higher levels (i.e., at the mental-model level, or of actual behavior) is much harder to achieve and maintain. Thus, it is an open empirical question whether instructions to attend to and use source information in refutation texts would increase the likelihood of long-term revision at higher levels of misconceived knowledge.

In the present study, we focused on manipulating source credibility within a single document or as an in-text factor. However, documents by themselves are sources. In this context, sourcing is “the act of looking first to the source of the document before reading the body of the text” (Wineburg, 1991, p. 77). Consistent with the results of the present study, the literature on sourcing has also shown that readers do not engage in sourcing when reading texts under normal reading conditions (Goldman et al., 2012; Kim & Millis, 2006), and even when they do, nonexperts’ attention to source information is limited (Britt & Aglinskas, 2002; Kobayashi, 2014). Thus, taken together, the findings of the present study and those in the extant literature on sourcing suggest that source credibility plays an important part in reading comprehension and knowledge revision primarily when it is attended to. This conclusion suggests that it would be important for future research to examine how task instructions and text/document characteristics can be leveraged to enhance the salience of source credibility, at the level of the text as a document or as an in-text factor.

Finally, even though it would have been preferable to include a nonrefutation control condition in Experiments 2 and 3, we opted to focus on the refutation condition only, because previous work had established the superiority of refutation texts over nonrefutation controls using the same paradigm. Indeed, the mean reading times and posttest scores obtained in Experiments 2 and 3 were comparable to those obtained in previous experiments using this paradigm (Kendeou et al., 2013; Kendeou et al., 2014). Nevertheless, within-experiment comparisons are always more desirable than across-experiment comparisons, so replicating these effects with within-experiment manipulations will be an important goal for future research following up on this work.

In conclusion, the present set of experiments adds to our understanding of the text and task factors that constrain knowledge revision during reading. The present set of results demonstrates that when the credibility of a source providing information in a refutation text is made salient, and this source is of low credibility, the knowledge revision process will be disrupted. From a theory construction perspective, the results can help add to what we know about how readers use information when they are confronted with discrepant information, what information they use, and how that information may be used in the process of knowledge revision.