Abstract
In criminal justice, as in other areas, practitioners and policy makers often wish to know whether something “works” or is “effective.” Does a certain form of family therapy reduce troubled adolescents’ involvement in crime more than would be seen if they were on probation? Does a jail diversion policy substantially increase indicators of community adjustment for mentally ill individuals who are arrested and processed under this policy? If so, by how much?
Trying to gauge the impact of programs or policies is eminently logical for several reasons. Most obviously, this type of information is important from a traditional cost-benefit perspective. Knowing the overall impact of a program in terms of tangible, measurable benefits to some target group is necessary to assess whether an investment in the program buys much. For instance, a drug rehabilitation program, which requires a large fixed start-up cost plus considerable operating expenses, should be able to show that this investment pays off in reduced drug use or criminal activity among its clients. Quantifiable estimates of the impact of policies or programs are also important in assessing the overall social benefit of particular approaches; it is often useful to know how much a recent change in policy has affected some subgroup in an unintended way. For instance, more stringent penalties for dealing crack, rather than powdered cocaine, appear to have produced only a marginal decrease in drug trafficking at the expense of considerable racial disparity in sentencing. Informed practice and policy rest on empirical quantification of how much outcomes shift when certain approaches or policies are put into place.
Notes
- 1.
For a thorough overview of the framework of the Rubin Causal Model, see Holland (1986).
- 2.
- 3.
The concept of noncompliance should be understood in a purely statistical sense here: it literally means not adhering to the randomly assigned treatment. Often, particularly in some clinical applications, the term noncompliant carries a negative connotation, as in an unwillingness to accept a helpful therapy. Noncompliance can occur for a variety of reasons, not simply lack of insight or stubbornness, and should therefore not be taken to indicate anything negative about an individual when used in this context. For instance, if a chronic headache sufferer is randomized into the group testing the effectiveness of a new drug and chooses not to take the drug simply because there is no pain at the time of treatment, then this individual is a non-complier as defined here.
- 4.
Angrist, Imbens and Rubin (1996) also define a group known as never-takers: those who, regardless of the instrument, never select into treatment and are therefore never part of the treated group. Furthermore, the assumption of monotonicity effectively rules out the existence of defiers: those who would select into treatment when the instrument makes them less likely to do so, but would not select into treatment when the instrument makes them more likely to do so.
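The compliance strata described in these notes, and the instrumental-variables (Wald) ratio that recovers the average effect among compliers, can be illustrated with a small simulation. This is a minimal sketch: the strata shares, effect sizes, and noise level below are invented purely for illustration and correspond to no study discussed in the chapter.

```python
import random

random.seed(0)
n = 100_000

# Principal strata in the sense of Angrist, Imbens and Rubin (1996):
# compliers take treatment only when assigned to it, never-takers never
# take it, always-takers always do; monotonicity rules out defiers.
# The shares and effects below are made-up illustrative values.
strata = ["complier", "never", "always"]
weights = [0.6, 0.3, 0.1]
COMPLIER_EFFECT = 2.0   # the estimand the Wald ratio recovers (the LATE)
ALWAYS_EFFECT = 1.0     # differs on purpose; it cancels out of the ratio

y1, d1, y0, d0 = [], [], [], []
for _ in range(n):
    s = random.choices(strata, weights)[0]
    z = random.random() < 0.5                     # random assignment (instrument)
    d = s == "always" or (s == "complier" and z)  # actual treatment uptake
    effect = COMPLIER_EFFECT if s == "complier" else ALWAYS_EFFECT
    y = effect * d + random.gauss(0.0, 1.0)       # outcome with noise
    (y1 if z else y0).append(y)
    (d1 if z else d0).append(float(d))

def mean(xs):
    return sum(xs) / len(xs)

itt_y = mean(y1) - mean(y0)   # intent-to-treat effect on the outcome
itt_d = mean(d1) - mean(d0)   # "first stage": effect of assignment on uptake
late = itt_y / itt_d          # Wald estimator; approximately COMPLIER_EFFECT
print(f"first stage: {itt_d:.2f}, LATE estimate: {late:.2f}")
```

With these illustrative settings the first stage is roughly the complier share (0.6), and the ratio lands near the complier effect of 2.0 even though always-takers have a different effect, because always-takers are treated in both arms and drop out of the comparison.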
References
Angrist JD (1990) Lifetime earnings and the Vietnam era draft lottery: evidence from social security administrative records. Am Econ Rev 80:313–335
Angrist JD (2004) Treatment effect heterogeneity in theory and practice, The Royal Economic Society Sargan Lecture. Econ J 114:C52–C83
Angrist JD (2006) Instrumental variables methods in experimental criminological research: what, why, and how. J Exp Criminol 2:23–44
Angrist J, Imbens G, Rubin DB (1996) Identification of causal effects using instrumental variables. J Am Stat Assoc 91:444–455
Berk RA, Sherman LW (1988) Police response to family violence incidents: an analysis of an experimental design with incomplete randomization. J Am Stat Assoc 83(401):70–76
Heckman JJ (1997) Instrumental variables: a study of implicit behavioral assumptions used in making program evaluations. J Hum Resour 32(2):441–462
Heckman JJ, Smith JA (1995) Assessing the case for social experiments. J Econ Perspect 9(2):85–110
Holland PW (1986) Statistics and causal inference. J Am Stat Assoc 81:945–960
Imbens GW, Angrist JD (1994) Identification and estimation of local average treatment effects. Econometrica 62:467–475
LaLonde RJ (1986) Evaluating the econometric evaluations of training programs with experimental data. Am Econ Rev 76:604–620
Manski CF (1995) Identification problems in the social sciences. Harvard University Press, Cambridge
McCaffrey DF, Ridgeway G, Morral AR (2004) Propensity score estimation with boosted regression for evaluating causal effects in observational studies. Psychol Methods 9(4):403–425
Needleman HL, Riess JA, Tobin MJ, Biesecker GE, Greenhouse JB (1996) Bone lead levels and delinquent behavior. J Am Med Assoc 275(5):363–369
Nevin R (2000) How lead exposure relates to temporal changes in IQ, violent crime, and unwed pregnancy. Environ Res 83(1):1–22
Neyman JS (1923) On the application of probability theory to agricultural experiments. Essay on principles. Section 9. Stat Sci 4:465–480
Ridgeway G (2006) Assessing the effect of race bias in post-traffic stop outcomes using propensity scores. J Quant Criminol 22(1):1–29
Robins JM, Greenland S, Hu F-C (1999) Estimation of the causal effect of a time-varying exposure on the marginal mean of a repeated binary outcome. J Am Stat Assoc 94:687–700
Robins JM, Hernan MA, Brumback B (2000) Marginal structural models and causal inference in epidemiology. Epidemiology 11(5):550–560
Rosenbaum PR (2002) Observational studies, 2nd edn. Springer-Verlag, New York
Rosenbaum P, Rubin DB (1983) The central role of the propensity score in observational studies for causal effects. Biometrika 70:41–55
Rosenbaum PR, Rubin DB (1985) The bias due to incomplete matching. Biometrics 41:103–116
Rubin DB (1974) Estimating causal effects of treatments in randomized and nonrandomized studies. J Educ Psychol 66:688–701
Rubin DB (1977) Assignment to treatment groups on the basis of a covariate. J Educ Stat 2:1–26
Rubin DB (1978) Bayesian inference for causal effects: the role of randomization. Ann Stat 6:34–58
Shadish WR, Cook TD, Campbell DT (2001) Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin, Boston
Sherman LW, Berk RA (1984) The specific deterrent effects of arrest for domestic assault. Am Sociol Rev 49(2):261–272
Weisburd D, Lum C, Petrosino A (2001) Does research design affect study outcomes in criminal justice? Ann Am Acad Pol Soc Sci 578:50–70
Wright JP, Dietrich KN, Ris MD, Hornung RW, Wessel SD, Lanphear BP, Ho M, Rae MN (2008) Association of prenatal and childhood blood lead concentrations with criminal arrests in early adulthood. PLoS Med 5:e101
Copyright information
© 2010 Springer Science+Business Media, LLC
About this chapter
Cite this chapter
Loughran, T.A., Mulvey, E.P. (2010). Estimating Treatment Effects: Matching Quantification to the Question. In: Piquero, A., Weisburd, D. (eds) Handbook of Quantitative Criminology. Springer, New York, NY. https://doi.org/10.1007/978-0-387-77650-7_9
Print ISBN: 978-0-387-77649-1
Online ISBN: 978-0-387-77650-7