
Journal of School Psychology

Volume 68, June 2018, Pages 99-112

Using response ratios for meta-analyzing single-case designs with behavioral outcomes

https://doi.org/10.1016/j.jsp.2018.02.003

Abstract

Methods for meta-analyzing single-case designs (SCDs) are needed to inform evidence-based practice in clinical and school settings and to draw broader and more defensible generalizations in areas where SCDs comprise a large part of the research base. The most widely used outcomes in single-case research are measures of behavior collected using systematic direct observation, which typically take the form of rates or proportions. For studies that use such measures, one simple and intuitive way to quantify effect sizes is in terms of proportionate change from baseline, using an effect size known as the log response ratio. This paper describes methods for estimating log response ratios and combining the estimates using meta-analysis. The methods are based on a simple model for comparing two phases, where the level of the outcome is stable within each phase and the repeated outcome measurements are independent. Although autocorrelation will lead to biased estimates of the sampling variance of the effect size, meta-analysis of response ratios can be conducted with robust variance estimation procedures that remain valid even when sampling variance estimates are biased. The methods are demonstrated using data from a recent meta-analysis on group contingency interventions for student problem behavior.

Section snippets

Log response ratios

The LRR effect size is defined based on a simple model for the data from a baseline phase and an intervention phase within a single-case design. Suppose that the baseline phase includes $m$ sessions, with outcome data $Y_1^A, \ldots, Y_m^A$, and that the intervention phase includes $n$ sessions, with outcome data $Y_1^B, \ldots, Y_n^B$. Let us assume that the average level of the outcome is constant within each phase (i.e., lacking any systematic time trend). Let $\mu_A$ denote the mean level of the outcome during the baseline phase and $\mu_B$ the mean level during the intervention phase. The log response ratio is then defined as $\mathrm{LRR} = \ln(\mu_B / \mu_A) = \ln(\mu_B) - \ln(\mu_A)$, the natural logarithm of the proportionate change from baseline.
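To illustrate the estimation step, the following is a minimal sketch (not the paper's supplementary code) that computes the LRR point estimate and a first-order delta-method approximation to its sampling variance under the simple model above, assuming stable phase means and independent, strictly positive outcome measurements. The function name `lrr` and the example data are hypothetical.

```python
import numpy as np

def lrr(y_A, y_B):
    """Log response ratio ln(mean_B / mean_A) with a delta-method
    sampling variance, under the simple two-phase model described
    above. Assumes positive outcomes and mutually independent
    measurements (autocorrelation would bias the variance)."""
    y_A = np.asarray(y_A, dtype=float)
    y_B = np.asarray(y_B, dtype=float)
    m, n = y_A.size, y_B.size
    mean_A, mean_B = y_A.mean(), y_B.mean()
    est = np.log(mean_B) - np.log(mean_A)
    # First-order (delta-method) approximation to the sampling variance
    var = y_A.var(ddof=1) / (m * mean_A**2) + y_B.var(ddof=1) / (n * mean_B**2)
    return est, var

# Hypothetical rates of problem behavior (events per minute)
baseline = [4.1, 3.8, 4.5, 4.0, 4.2]
treatment = [1.2, 0.9, 1.4, 1.0]
est, var = lrr(baseline, treatment)
print(f"LRR = {est:.3f}, SE = {var**0.5:.3f}")
```

The variance expression follows from a first-order Taylor expansion of the log of each sample mean, in the style of the response-ratio variance of Hedges et al. (1999).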

Preparing LRR estimates for use in meta-analysis

When considering use of LRR effect sizes for synthesizing multiple SCD studies, researchers must address several further issues before carrying out effect size calculations and meta-analysis. This section describes three issues and methods for addressing each: (1) how to determine whether the LRR is an appropriate effect size metric, (2) how to transform the effect sizes so that their signs (positive or negative) are consistent with the direction of therapeutic improvement for the outcome, and (3) how to handle phases in which the observed mean level is zero, for which the logarithm is undefined.
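For issue (2), orienting each effect toward therapeutic improvement reduces to negating the LRR whenever a decrease in the outcome is the desired direction; negation leaves the sampling variance unchanged. A minimal sketch, reusing the hypothetical `lrr` function from above:

```python
def lrr_toward_improvement(y_A, y_B, increase_is_improvement):
    """Return the LRR oriented so that positive values always indicate
    therapeutic improvement. For outcomes where a reduction is the goal
    (e.g., problem behavior), the raw LRR is negated."""
    est, var = lrr(y_A, y_B)
    if not increase_is_improvement:
        est = -est  # reflection; the sampling variance is unchanged
    return est, var
```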

Meta-analysis with robust variance estimation

Meta-analysis is a set of statistical techniques for synthesizing results across studies in order to draw generalizations about overall patterns of findings (Borenstein et al., 2009). Meta-analysis can be used to address questions about the overall average magnitude of effects, the degree of consistency or inconsistency (heterogeneity) of results across studies, and characteristics of participants or studies that moderate the magnitude of effect sizes.

In synthesis of between-groups research
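To make the robust variance estimation approach concrete, below is a minimal sketch (not the paper's own code) of a cluster-robust "sandwich" estimate of the average effect in the spirit of Hedges, Tipton, and Johnson (2010). It implements only a basic CR0-type estimator, omitting the small-sample corrections that applied syntheses would require; the function name and data are hypothetical.

```python
import numpy as np

def rve_average_effect(effects, variances, study_ids):
    """Inverse-variance weighted average effect with a sandwich
    standard error clustered by study, so that the SE remains
    approximately valid even when the per-effect sampling variance
    estimates are biased (e.g., by autocorrelation)."""
    T = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    ids = np.asarray(study_ids)
    beta = np.sum(w * T) / np.sum(w)
    # Sum of squared cluster-level weighted residuals (CR0 sandwich)
    cluster_sums = [np.sum(w[ids == j] * (T[ids == j] - beta))
                    for j in np.unique(ids)]
    se = np.sqrt(np.sum(np.square(cluster_sums))) / np.sum(w)
    return beta, se

# Hypothetical LRR estimates: two studies, each contributing two cases
effects = [-0.9, -1.1, -0.7, -1.3]
variances = [0.04, 0.05, 0.03, 0.06]
study_ids = [1, 1, 2, 2]
print(rve_average_effect(effects, variances, study_ids))
```

Because the standard error is built from cluster-level residuals rather than the reported variances alone, it does not depend on those variances being unbiased, which is what makes the approach attractive for SCD data.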

Discussion

In this paper, I have demonstrated the use of a recently proposed effect size index, the log response ratio, for meta-analysis of SCDs with behavioral outcome measures. Compared to meta-analysis based on other effect size indices, the proposed methods are distinctive in several respects.

First, development of the LRR was motivated by a realistic model for systematic direct observation procedures (Pustejovsky, 2015), and the index is thus designed to work well with behavioral outcomes. Other

Acknowledgements

This work was supported by Grant R305D160002 from the Institute of Education Sciences, U.S. Department of Education. The opinions expressed are those of the author and do not represent the views of the Institute or the U.S. Department of Education. The author is grateful to Daniel Maggin, David Rindskopf, Tsuyoshi Yamada, and Kathleen Zimmerman for feedback on draft versions of this paper.

References (60)

  • J.M. Campbell et al. Statistics and single subject research methodology.
  • E.A. Common et al. Functional assessment-based interventions for students with or at-risk for high-incidence disabilities: Field testing single-case synthesis methods. Remedial and Special Education (2017).
  • Council for Exceptional Children Working Group. Council for Exceptional Children: Standards for evidence-based practices in special education. Teaching Exceptional Children (2014).
  • Geomatrix. XYit.
  • W.J. Gingerich. Meta-analysis of applied time-series data. The Journal of Applied Behavioral Science (1984).
  • A.K. Heath et al. A meta-analytic review of functional communication training across mode of communication, age, and disability. Review Journal of Autism and Developmental Disorders (2015).
  • L.V. Hedges. What are effect sizes and why do we need them? Child Development Perspectives (2008).
  • L.V. Hedges et al. The meta-analysis of response ratios in experimental ecology. Ecology (1999).
  • L.V. Hedges et al. Robust variance estimation in meta-regression with dependent effect size estimates. Research Synthesis Methods (2010).
  • J.H. Hitchcock et al. What Works Clearinghouse standards and generalization of single-case design evidence. Journal of Behavioral Education (2015).
  • R.H. Horner et al. Synthesizing single-case research to identify evidence-based practices: Some brief reflections. Journal of Behavioral Education (2012).
  • R.H. Horner et al. Considerations for the systematic analysis and use of single-case research. Education and Treatment of Children (2012).
  • B.E. Huitema et al. Irrelevant autocorrelation in least-squares intervention models. Psychological Methods (1998).
  • B.E. Huitema et al. Identifying autocorrelation generated by various error processes in interrupted time-series regression designs: A comparison of AR1 and portmanteau tests. Educational and Psychological Measurement (2007).
  • S. Kahng et al. Behavioral treatment of self-injury, 1964 to 2000. American Journal of Mental Retardation (2002).
  • T.R. Kratochwill et al. Single-case intervention research design standards. Remedial and Special Education (2013).
  • T.R. Kratochwill et al. Visual analysis of single-case intervention research: Conceptual and methodological issues.
  • K.L. Lane et al. An examination of the evidence base for function-based interventions for students with emotional and/or behavioral disorders attending middle and high schools. Exceptional Children (2009).
  • J.R. Ledford et al. Antecedent social skills interventions for individuals with ASD: What works, for whom, and under what conditions? Focus on Autism and Other Developmental Disabilities (2018).
  • R.C. Littell et al. SAS system for linear mixed models (2006).

    An earlier version of this paper was presented at the annual convention of the American Educational Research Association, April 28, 2017 in San Antonio, Texas. Supplementary materials are available at https://osf.io/c3fe9/.
