Developing and evaluating complex interventions: The new Medical Research Council guidance☆
Revisiting the 2000 MRC framework
As experience of evaluating complex interventions has accumulated since the 2000 framework was published, interest in the methodology has also grown. Several recent papers have identified limitations in the framework, recommending, for example, greater attention to early phase piloting and development work (Hardeman et al., 2005), a less linear model of evaluation process (Campbell et al., 2007a), integration of process and outcome evaluation (Oakley et al., 2006), recognition that complex …
What are complex interventions?
Complex interventions are usually described as interventions that contain several interacting components, but they have other characteristics that evaluators should take into account (Box 1). There is no sharp boundary between simple and complex interventions. Few interventions are truly simple, but the number of components and range of effects may vary widely. Some highly complex interventions, such as the Sure Start intervention to support families with young children in deprived communities …
Development, evaluation, and implementation
The 2000 framework characterised the process of development through to implementation of a complex intervention in terms of the phases of drug development. Although it is useful to think in terms of phases, in practice these may not follow a linear or even a cyclical sequence (Fig. 1) (Campbell et al., 2007a).
Best practice is to develop interventions systematically, using the best available evidence and appropriate theory, then to test them using a carefully phased approach, starting with a …
Developing a complex intervention
Identifying existing evidence—Before a substantial evaluation is undertaken, the intervention must be developed to the point where it can reasonably be expected to have a worthwhile effect. The first step is to identify what is already known about similar interventions and the methods that have been used to evaluate them. If there is no recent, high quality systematic review of the relevant evidence, one should be conducted and updated as the evaluation proceeds.
Identifying and …
Assessing feasibility
Evaluations are often undermined by problems of acceptability, compliance, delivery of the intervention, recruitment and retention, and smaller than expected effect sizes that could have been predicted by thorough piloting (Eldridge et al., 2004). A feasibility study for an evaluation of an adolescent sexual health intervention in rural Zimbabwe found that the planned classroom based programme was inappropriate, given cultural norms, teaching styles, and relationships between teachers and …
Evaluating a complex intervention
There are many study designs to choose from, and different designs suit different questions and circumstances. Researchers should beware of blanket statements about what designs are suitable for what kind of intervention and choose on the basis of specific characteristics of the study, such as expected effect size and likelihood of selection or allocation bias. Awareness of the whole range of experimental and non-experimental approaches should lead to more appropriate methodological choices.
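To make concrete how characteristics such as expected effect size feed into design choice, the following sketch shows a standard sample-size calculation for a two-arm trial, inflated by the usual design effect when randomisation is by cluster. This is an illustrative assumption-laden example, not part of the guidance: the effect size, cluster size, and intracluster correlation (ICC) below are invented for demonstration.

```python
import math
from statistics import NormalDist

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Participants per arm to detect a standardised effect size
    with a two-sided test at the given alpha and power."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return math.ceil(n)

def design_effect(cluster_size: int, icc: float) -> float:
    """Inflation factor for cluster randomisation: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

# Illustrative numbers (assumptions, not from the paper):
n_ind = n_per_arm(effect_size=0.3)                    # individually randomised
n_clu = math.ceil(n_ind * design_effect(20, 0.05))    # clusters of 20, ICC = 0.05
```

A modest ICC nearly doubles the required sample here, which is the kind of study-specific consideration the guidance asks researchers to weigh rather than relying on blanket statements about design.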
Conclusions
We recognise that many issues surrounding evaluation of complex interventions are still debated, that methods will continue to develop, and that practical applications will be found for some of the newer theories. We do not intend the revised guidance to be prescriptive but to help researchers, funders, and other decision-makers to make appropriate methodological and practical choices. We have primarily aimed our messages at researchers, but publishers, funders, and commissioners of research …
Contributors
PD had the idea of revising and updating the MRC framework. It was further developed at a workshop co-convened with Sally Macintyre and Janet Darbyshire, and organised with the help of Linda Morris on 15–16 May 2006. Workshop participants and others with an interest in the evaluation of complex interventions were invited to comment on a draft of the revised guidance, which was also reviewed by members of the MRC Health Services and Public Health Research Board and MRC Methodology Research …
Funding
MRC Health Services and Public Health Research Board and the MRC Population Health Sciences Research Network.
Competing interests
None declared.
Provenance and peer review
Not commissioned; externally peer reviewed.
References (36)
- et al. Effect of air pollution control on death rates in Dublin, Ireland: an intervention study. Lancet (2002).
- et al. Retrofitting houses with insulation to reduce health inequalities: aims and methods of a clustered community-based trial. Social Science and Medicine (2005).
- et al. Reliable assessment of the effects of treatment on mortality and major morbidity, II. Observational studies. Lancet (2001).
- Let science rule: the rational way to run societies. New Scientist (2008).
- et al. Evaluating health effects of transport interventions: methodologic case study. American Journal of Preventive Medicine (2006).
- et al. Changing schools, changing health? Design and implementation of the Gatehouse Project. Journal of Adolescent Health (2003).
- An exemplary evaluation of a program that worked: The High/Scope Perry preschool project. American Journal of Evaluation (1995).
- et al. A taxonomy of behavior change techniques used in interventions. Health Psychology (2008).
- Identifying the Environmental Causes of Disease: How Should We Decide What to Believe and When to Take Action? (2007).
- et al. (National Evaluation of Sure Start Research Team). Effects of Sure Start local programmes on children and families: early findings from a quasi-experimental, cross sectional study. British Medical Journal (2006).
- Why we need observational studies to evaluate the effectiveness of health care. British Medical Journal.
- Extending the CONSORT statement to randomized trials of non-pharmacologic treatment: explanation and elaboration. Annals of Internal Medicine.
- Framework for the design and evaluation of complex interventions to improve health. British Medical Journal.
- Designing and evaluating complex interventions to improve health care. British Medical Journal.
- Developments in cluster randomised trials and Statistics in Medicine. Statistics in Medicine.
- Towards a Policy Evaluation Service: Developing Infrastructure to Support the Use of Experimental and Quasi-experimental Methods.
- Lessons for cluster randomized trials in the twenty-first century: a systematic review of trials in primary care. Clinical Trials.
- Why modelling a complex intervention is an important precursor to trial design: lessons from studying an intervention to reduce falls-related injuries in elderly people. Journal of Health Services Research and Policy.
☆ This article was originally published in the BMJ 337, pp. 979–983, and is republished with permission from the BMJ as part of a series of classic methods papers. An introductory commentary is available at http://dx.doi.org/10.1016/j.ijnurstu.2012.09.009.