
What Works Clearinghouse Standards and Generalization of Single-Case Design Evidence

  • Commentary
  • Journal of Behavioral Education

Abstract

A recent review of existing rubrics designed to help researchers evaluate the internal and external validity of single-case design (SCD) studies found that the various options yield consistent results when examining causal arguments. The authors of the review, however, noted considerable differences across the rubrics when addressing the generalization of findings. One critical finding is that the What Works Clearinghouse (WWC) review process does not capture details needed for report readers to evaluate generalization. This conclusion is reasonable if one considers only the WWC’s SCD design standards, but these standards are not used in isolation; how the WWC handles generalization cannot be fully understood without also considering its review protocols and a tool called the WWC SCD review guide. Our purpose in this commentary is to clarify how the WWC review procedures gather information on generalization criteria and to describe a threshold for judging how much evidence is available. Clarifying how the system works should help the SCD research community understand the standards, which in turn might facilitate use of future WWC reports and possibly influence both the conduct and the reporting of SCD studies.
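The threshold referenced above is developed in the full commentary rather than in this abstract. As a minimal, purely illustrative sketch, the code below assumes the 5-3-20 criterion proposed by Horner et al. (2005) and incorporated in the WWC pilot SCD standards (at least five studies meeting design standards, conducted by at least three independent research teams, and including at least 20 cases in total); the SCDStudy record and function name are hypothetical, not part of the WWC review guide.

    from dataclasses import dataclass

    @dataclass
    class SCDStudy:
        """One SCD study that already meets design standards (hypothetical record)."""
        research_team: str  # identifier for the independent research team
        n_cases: int        # number of cases (participants) in the study

    def meets_5_3_20_threshold(studies: list[SCDStudy]) -> bool:
        """Apply the assumed 5-3-20 evidence threshold: at least 5 qualifying
        studies, at least 3 independent research teams, and at least 20 total
        cases across the studies."""
        enough_studies = len(studies) >= 5
        enough_teams = len({s.research_team for s in studies}) >= 3
        enough_cases = sum(s.n_cases for s in studies) >= 20
        return enough_studies and enough_teams and enough_cases

    # Example: five qualifying studies by three teams with 21 cases in total.
    evidence = [
        SCDStudy("Team A", 4), SCDStudy("Team A", 5),
        SCDStudy("Team B", 4), SCDStudy("Team B", 4),
        SCDStudy("Team C", 4),
    ]
    print(meets_5_3_20_threshold(evidence))  # True

The point of the sketch is only that such a threshold combines counts of studies, teams, and cases; the commentary itself describes how the WWC review procedures actually gather this information.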


Notes

  1. Shadish et al. (2002) argue that internal validity, or the degree to which a claimed causal relationship between a treatment and an outcome variable is valid, is the sine qua non of experimental design. In other words, there might not be much point in carefully pondering the external validity (which is related to generalization) of studies that do not yield strong evidence of a causal effect: if one cannot demonstrate that a given treatment was responsible for some outcome, there is little reason to examine whether the evidence generalizes to different contexts. As applied to SCDs, if one has no or limited confidence that there is a functional relationship between a treatment (independent variable) and a dependent variable, then why do the hard work of generalizing?

  2. The WWC Study Review Guide (SRG) is subject to change. A copy of the current Review Guide is available here: http://ies.ed.gov/ncee/wwc/DownloadSRG.aspx. The Study Review Guide used and/or referenced herein was developed by the U.S. Department of Education, Institute of Education Sciences through its What Works Clearinghouse project and was used by the authors with permission from the Institute of Education Sciences. Neither the Institute of Education Sciences nor the contractors that administer the What Works Clearinghouse endorses the content herein.

  3. So are other common criteria, such as setting p values at .05; see Cohen (1994).

References

  • Barlow, D. H., Nock, M. K., & Hersen, M. (2009). Single case experimental designs: Strategies for studying behavior change (2nd ed.). Boston, MA: Pearson.

  • Bowman-Perrott, L., Davis, H., Vannest, K. J., Williams, L., Greenwood, C. R., & Parker, R. (2013). Academic benefits of peer tutoring: A meta-analytic review of single-case research. School Psychology Review, 42(1), 39–59.

  • Cohen, J. (1994). The Earth is round (p < .05). American Psychologist, 49(12), 997–1003. doi:10.1037/0003-066X.49.12.997.

  • Dart, E. H., Collins, T. A., Klingbeil, D. A., & McKinley, L. E. (2014). Peer management interventions: A meta-analytic review of single-case research. School Psychology Review, 43, 367–384.

  • Deegear, J., & Lawson, D. M. (2003). The utility of empirically supported treatments. Professional Psychology: Research and Practice, 34(3), 271–277. doi:10.1037/0735-7028.34.3.271.

  • Hedges, L. V. (2013). Recommendations for practice: Justifying claims of generalizability. Educational Psychology Review, 25(3), 331–337. doi:10.1007/s10648-013-9239-x.

  • Hitchcock, J. H., Horner, R. H., Kratochwill, T. R., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2014). The What Works Clearinghouse single-case design pilot standards: Who will guard the guards? Remedial and Special Education. Advance online publication. doi:10.1177/0741932513518979. (Contributors are listed in alphabetical order.)

  • Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single subject research to identify evidence-based practice in special education. Exceptional Children, 71(2), 165–179.

  • Kazdin, A. E. (2011). Single-case research designs: Methods for clinical and applied settings (2nd ed.). New York, NY: Oxford University Press.

  • Kratochwill, T. R. (2002). Evidence-based interventions in school psychology: Thoughts on thoughtful commentary. School Psychology Quarterly, 17, 518–532. doi:10.1521/scpq.17.4.518.20861.

  • Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D., & Shadish, W. R. (2010). Single case designs technical documentation. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf.

  • Kratochwill, T. R., Hitchcock, J. H., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2013). Single-case intervention research design standards. Remedial and Special Education, 34, 26–38. doi:10.1177/0741932512452794.

  • Kratochwill, T. R., & Levin, J. R. (Eds.). (2014). Single-case intervention research: Methodological and statistical advances. Washington, DC: American Psychological Association.

  • Kratochwill, T. R., & Stoiber, K. C. (2000). Diversifying theory and science: Expanding boundaries of empirically supported interventions in schools. Journal of School Psychology, 38, 349–358. doi:10.1016/S0022-4405(00)00039-X.

  • Kratochwill, T. R., & Stoiber, K. C. (2002). Evidence-based interventions in school psychology: Conceptual foundations of the Procedural and Coding Manual of Division 16 and the Society for the Study of School Psychology Task Force. School Psychology Quarterly, 17, 341–389.

  • Maggin, D. M., Briesch, A. M., Chafouleas, S. M., Ferguson, T. D., & Clark, C. (2013). A comparison of rubrics for identifying empirically supported practices with single-case research. Journal of Behavioral Education, 23, 287–311. doi:10.1007/s10864-013-9187-z.

  • Schneider, B., Carnoy, M., Kilpatrick, J., Schmidt, W. H., & Shavelson, R. J. (2007). Estimating causal effects using experimental and nonexperimental designs (Report from the Governing Board of the American Educational Research Association Grants Program). Washington, DC: American Educational Research Association.

  • Shadish, W. R. (1995). The logic of generalization: Five principles common to experiments and ethnographies. American Journal of Community Psychology, 23, 419–428. doi:10.1007/BF02506951.

  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

  • Smith, J. D. (2012). Single-case experimental designs: A systematic review of published research and current standards. Psychological Methods, 17(4), 510–550. doi:10.1037/a0029312.

  • Wendt, O., & Miller, B. (2012). Quality appraisal of single-subject experimental designs: An overview and comparison of different appraisal tools. Education and Treatment of Children, 35(3), 235–265.

  • What Works Clearinghouse. (2013). Procedures and standards handbook (Version 3.0). Retrieved from http://ies.ed.gov/ncee/wwc/documentsum.aspx?sid=19.

  • What Works Clearinghouse. (2014). WWC intervention report: Repeated reading. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/intervention_reports/wwc_repeatedreading_051314.pdf.

Author information

Corresponding author

Correspondence to John H. Hitchcock.

Additional information

Some of the information contained herein is based on the What Works Clearinghouse’s Single-case design technical documentation, version 1.0 (Pilot), referred to as the Standards in this article, which was produced by two of the current authors (Kratochwill and Hitchcock) and the Panel members and is available at http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf. The Standards described in the technical documentation were developed by a Panel of authors for the Institute of Education Sciences (IES) under Contract ED-07-CO-0062 with Mathematica Policy Research, Inc., to operate the What Works Clearinghouse (WWC). The content of this article does not necessarily represent the views of the Institute of Education Sciences or the WWC.


About this article


Cite this article

Hitchcock, J.H., Kratochwill, T.R. & Chezan, L.C. What Works Clearinghouse Standards and Generalization of Single-Case Design Evidence. J Behav Educ 24, 459–469 (2015). https://doi.org/10.1007/s10864-015-9224-1

