Estimating Intervention Effectiveness: Synthetic Projection of Field Evaluation Results

Published in: Journal of Primary Prevention

Abstract

In a 46-site, 5-year high-risk youth substance abuse prevention evaluation, effect sizes were adjusted using a meta-analytic regression technique to project potential effectiveness under more optimal research and implementation conditions. Adjusting effect size estimates to control for the impact of comparison group prevention exposure, service intensity, and coherent program implementation raised the mean effectiveness estimate from near zero (.02, SD = .21) to .24 (SD = .18). This finding suggests that adolescent prevention programs can have significant positive effects under optimal, yet obtainable conditions.

Editors’ Strategic Implications: The authors present a meta-analytic technique that promises to be an important tool for understanding what works in multi-site community-based prevention settings. Researchers will find this a creative approach to modeling the “noise” in implementation that may often overshadow the potential impact of prevention programs.
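The adjustment the abstract describes can be illustrated with a minimal meta-regression sketch: regress site-level effect sizes on implementation moderators, then use the fitted model to project the mean effect under more optimal moderator values. The data, variable names, and coefficients below are hypothetical illustrations, not the paper's actual data or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for 46 sites: an observed effect size per site and
# three moderators loosely mirroring those named in the abstract.
n_sites = 46
comparison_exposure = rng.uniform(0, 1, n_sites)  # prevention exposure in comparison group
service_intensity = rng.uniform(0, 1, n_sites)    # intensity of services delivered
implementation = rng.uniform(0, 1, n_sites)       # coherence of program implementation

# Simulated effects: stronger with intensity and coherent implementation,
# weaker when the comparison group also received prevention services.
noise = rng.normal(0, 0.1, n_sites)
effect_size = (0.4 * service_intensity + 0.3 * implementation
               - 0.5 * comparison_exposure + noise)

# Weighted least-squares meta-regression of effect sizes on moderators
# (uniform weights here; inverse-variance weights would be typical).
X = np.column_stack([np.ones(n_sites), comparison_exposure,
                     service_intensity, implementation])
w = np.full(n_sites, 1.0)
Xw = X * w[:, None]
beta, *_ = np.linalg.lstsq(Xw, w * effect_size, rcond=None)

# Project the mean effect under "optimal" conditions: no comparison-group
# exposure, full service intensity, fully coherent implementation.
optimal = np.array([1.0, 0.0, 1.0, 1.0])
projected = optimal @ beta

print(f"observed mean effect: {effect_size.mean():.2f}")
print(f"projected effect under optimal conditions: {projected:.2f}")
```

The design choice mirrors the paper's logic at a schematic level: the regression partials out between-site variation attributable to field conditions, and the projection asks what the fitted model implies when those conditions are set to their favorable values.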




Corresponding author

Correspondence to Elizabeth Sale.


Cite this article

Derzon, J.H., Sale, E., Springer, J.F. et al. Estimating Intervention Effectiveness: Synthetic Projection of Field Evaluation Results. J Primary Prevent 26, 321–343 (2005). https://doi.org/10.1007/s10935-005-5391-5
