
Critical Assumptions and Distribution Features Pertaining to Contemporary Single-Case Effect Sizes

Original Paper, Journal of Behavioral Education

Abstract

The use of single-case effect sizes (SCESs) has increased in the intervention literature, and meta-analyses based on single-case data have likewise grown in popularity. However, few researchers who have adopted these metrics have provided an adequate rationale for their selection. We review several important statistical assumptions that should be considered prior to calculating and interpreting SCESs. We then more closely investigate a sampling of these newer procedures and conclude with a critical analysis of the potential utility of these metrics.


Figures 1–5 (available in the full article)


Author information


Correspondence to Benjamin G. Solomon.

Appendix: Basic Assumption Testing in SPSS 21.0 Using Dropdown Menus


Calculating skew and kurtosis

 Analyze

  Descriptive statistics

   Descriptives

   Options (check skew and kurtosis)
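The same check can be run outside SPSS. A minimal Python sketch using SciPy (the phase values below are hypothetical illustrations, not data from the article):

```python
# Sample skew and excess kurtosis for one phase of single-case data.
# Values are hypothetical illustrations, not data from the article.
from scipy.stats import skew, kurtosis

phase = [3, 4, 4, 5, 6, 8, 12]  # hypothetical baseline observations

g1 = skew(phase)      # > 0 indicates a right (positive) tail
g2 = kurtosis(phase)  # Fisher definition: 0 for a normal distribution

print(round(g1, 2), round(g2, 2))
```

Large absolute values of either statistic, relative to their standard errors, flag the same departures from normality that the SPSS Descriptives output would show.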

Generating a boxplot to review normality

 Graphs

  Legacy dialogs

   Boxplot (simple)

Generating a Q–Q plot to review normality

 Analyze

  Descriptive statistics

   Q–Q plots (check normal)
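The same Q–Q diagnostic can be sketched with SciPy's `probplot` (the simulated data, seed, and sample size below are arbitrary assumptions):

```python
# Q-Q diagnostic: compare ordered phase data to theoretical normal
# quantiles. probplot also returns r, the correlation of the fitted
# line; r near 1 is consistent with approximate normality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
phase = rng.normal(loc=10, scale=2, size=12)  # simulated phase data

(osm, osr), (slope, intercept, r) = stats.probplot(phase, dist="norm")
print(round(r, 3))
```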

Levene’s test of homogeneity of variance

 Analyze

  Compare means

   Independent samples t test (Levene’s is part of the default output)
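A hedged sketch of the same test in Python, with hypothetical phase data; note that SciPy's default centers on the median (the Brown–Forsythe variant), whereas SPSS's default Levene output centers on the mean (`center="mean"` matches it more closely):

```python
# Levene's test of equal variances across two phases.
# Phase values are hypothetical illustrations.
from scipy.stats import levene

baseline = [2, 3, 3, 4, 4, 5]
intervention = [6, 9, 12, 15, 18, 21]  # visibly more variable

stat, p = levene(baseline, intervention)
print(round(stat, 2), round(p, 3))
```

A small p value suggests the homogeneity-of-variance assumption is violated.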

Testing parametric linear trend

Create a time-series variable (e.g., 1, 2, 3, 4, 5, 6…) equal in length to the phase data

 Analyze

  Regression

   Linear (input raw data and time variable)

Note that the Durbin–Watson test is also available in this module under “Statistics”

  A visual inspection of the graph will also be telling
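The steps above can be sketched in Python under the same logic: regress hypothetical phase data on a time index, test the slope, and compute the Durbin–Watson statistic (values near 2 indicate little lag-1 autocorrelation) from the residuals:

```python
# Parametric linear trend: regress hypothetical phase data on a
# time index; the Durbin-Watson statistic is computed from the
# regression residuals by hand.
import numpy as np
from scipy import stats

phase = np.array([4.0, 5.0, 5.5, 6.0, 7.5, 8.0, 9.0])
time = np.arange(1, len(phase) + 1)  # 1, 2, 3, ...

fit = stats.linregress(time, phase)
resid = phase - (fit.intercept + fit.slope * time)
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)  # Durbin-Watson

print(round(fit.slope, 2), round(fit.pvalue, 4), round(dw, 2))
```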

Testing heteroscedasticity

Create a time-series variable (e.g., 1, 2, 3, 4, 5, 6…) equal in length to the phase data

 Analyze

  Regression

   Linear (input phase data and time variable)

    Plots (place the standardized predicted values, *ZPRED, on the X axis and the standardized residuals, *ZRESID, on the Y axis). Inspect the plot for a fanning pattern
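The same screen can be approximated numerically rather than visually; a hypothetical sketch that correlates the absolute residuals with the fitted values (a Glejser-style check, not the article's procedure):

```python
# Heteroscedasticity screen: fit the trend line, then correlate the
# absolute residuals with the fitted values. A strong positive
# correlation suggests spread grows with level.
# Phase values are hypothetical.
import numpy as np
from scipy import stats

phase = np.array([2.0, 2.2, 1.8, 3.5, 5.5, 2.5, 8.0, 1.0])
time = np.arange(1, len(phase) + 1)

fit = stats.linregress(time, phase)
fitted = fit.intercept + fit.slope * time
abs_resid = np.abs(phase - fitted)

r, p = stats.pearsonr(fitted, abs_resid)
print(round(r, 2), round(p, 3))
```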

Testing nonparametric linear trend

Create a time-series variable (e.g., 1, 2, 3, 4, 5, 6…) equal in length to the phase data

 Analyze

  Correlate

   Bivariate (check Kendall’s Tau-b)
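A minimal Python sketch of the same nonparametric trend check, with hypothetical data; SciPy's `kendalltau` defaults to the tau-b variant, matching the box checked above:

```python
# Nonparametric trend: Kendall's tau-b between the time index and
# hypothetical phase data. A large positive tau indicates a
# monotone upward trend within the phase.
from scipy.stats import kendalltau

phase = [3, 4, 4, 6, 7, 9, 10]
time = list(range(1, len(phase) + 1))

tau, p = kendalltau(time, phase)
print(round(tau, 2), round(p, 3))
```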


About this article


Cite this article

Solomon, B.G., Howard, T.K. & Stein, B.L. Critical Assumptions and Distribution Features Pertaining to Contemporary Single-Case Effect Sizes. J Behav Educ 24, 438–458 (2015). https://doi.org/10.1007/s10864-015-9221-4
