Developing and evaluating complex interventions: The new Medical Research Council guidance

https://doi.org/10.1016/j.ijnurstu.2012.09.010


Revisiting the 2000 MRC framework

As experience of evaluating complex interventions has accumulated since the 2000 framework was published, interest in the methodology has also grown. Several recent papers have identified limitations in the framework, recommending, for example, greater attention to early phase piloting and development work (Hardeman et al., 2005), a less linear model of evaluation process (Campbell et al., 2007a), integration of process and outcome evaluation (Oakley et al., 2006), recognition that complex

What are complex interventions?

Complex interventions are usually described as interventions that contain several interacting components, but they have other characteristics that evaluators should take into account (Box 1). There is no sharp boundary between simple and complex interventions. Few interventions are truly simple, but the number of components and range of effects may vary widely. Some highly complex interventions, such as the Sure Start intervention to support families with young children in deprived communities (

Development, evaluation, and implementation

The 2000 framework characterised the process of development through to implementation of a complex intervention in terms of the phases of drug development. Although it is useful to think in terms of phases, in practice these may not follow a linear or even a cyclical sequence (Fig. 1) (Campbell et al., 2007a).

Best practice is to develop interventions systematically, using the best available evidence and appropriate theory, then to test them using a carefully phased approach, starting with a

Developing a complex intervention

Identifying existing evidence—Before a substantial evaluation is undertaken, the intervention must be developed to the point where it can reasonably be expected to have a worthwhile effect. The first step is to identify what is already known about similar interventions and the methods that have been used to evaluate them. If there is no recent, high quality systematic review of the relevant evidence, one should be conducted and updated as the evaluation proceeds.

Identifying and

Assessing feasibility

Evaluations are often undermined by problems of acceptability, compliance, delivery of the intervention, recruitment and retention, and smaller than expected effect sizes that could have been predicted by thorough piloting (Eldridge et al., 2004). A feasibility study for an evaluation of an adolescent sexual health intervention in rural Zimbabwe found that the planned classroom based programme was inappropriate, given cultural norms, teaching styles, and relationships between teachers and

Evaluating a complex intervention

There are many study designs to choose from, and different designs suit different questions and circumstances. Researchers should beware of blanket statements about what designs are suitable for what kind of intervention and choose on the basis of specific characteristics of the study, such as expected effect size and likelihood of selection or allocation bias. Awareness of the whole range of experimental and non-experimental approaches should lead to more appropriate methodological choices.
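As an illustration of how such characteristics feed into design decisions, the short Python sketch below (ours, not part of the original guidance) shows how an assumed standardised effect size determines the sample size required per arm, and how the design effect of a cluster randomised trial, of the kind discussed by Eldridge et al. (2004) and Campbell et al. (2007), inflates that number. All values are hypothetical.

    # Illustrative only: how an assumed effect size drives the required sample
    # size, and how clustering inflates it via the design effect. The numbers
    # (effect size d, cluster size m, intraclass correlation ICC) are
    # hypothetical, not taken from the guidance.
    from statistics import NormalDist

    def n_per_arm(effect_size, alpha=0.05, power=0.80):
        # Normal-approximation sample size per arm for a two-arm comparison
        # of means with a standardised effect size (Cohen's d).
        z = NormalDist()
        z_alpha = z.inv_cdf(1 - alpha / 2)
        z_beta = z.inv_cdf(power)
        return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

    def design_effect(cluster_size, icc):
        # Inflation factor for a cluster randomised design: 1 + (m - 1) * ICC.
        return 1 + (cluster_size - 1) * icc

    d, m, icc = 0.3, 20, 0.05
    base = n_per_arm(d)                       # roughly 175 per arm
    inflated = base * design_effect(m, icc)   # roughly 340 per arm
    print(f"Individually randomised: {base:.0f} per arm")
    print(f"Cluster randomised (m={m}, ICC={icc}): {inflated:.0f} per arm")

Halving the assumed effect size roughly quadruples the required sample size, which is why over-optimistic effect size estimates, unchecked by thorough piloting, so often leave evaluations underpowered.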

Conclusions

We recognise that many issues surrounding evaluation of complex interventions are still debated, that methods will continue to develop, and that practical applications will be found for some of the newer theories. We do not intend the revised guidance to be prescriptive but to help researchers, funders, and other decision-makers to make appropriate methodological and practical choices. We have primarily aimed our messages at researchers, but publishers, funders, and commissioners of research

Contributors

PD had the idea of revising and updating the MRC framework. It was further developed at a workshop co-convened with Sally Macintyre and Janet Darbyshire, and organised with the help of Linda Morris on 15–16 May 2006. Workshop participants and others with an interest in the evaluation of complex interventions were invited to comment on a draft of the revised guidance, which was also reviewed by members of the MRC Health Services and Public Health Research Board and MRC Methodology Research

Funding

MRC Health Services and Public Health Research Board and the MRC Population Health Sciences Research Network.

Competing interests

None declared.

Provenance and peer review

Not commissioned; externally peer reviewed.

References (36)

  • N. Black. Why we need observational studies to evaluate the effectiveness of health care. British Medical Journal (1996)
  • I. Boutron et al. Extending the CONSORT statement to randomized trials of non-pharmacologic treatment: explanation and elaboration. Annals of Internal Medicine (2008)
  • M. Campbell et al. Framework for the design and evaluation of complex interventions to improve health. British Medical Journal (2000)
  • N.C. Campbell et al. Designing and evaluating complex interventions to improve health care. British Medical Journal (2007)
  • M. Campbell et al. Developments in cluster randomised trials and Statistics in Medicine. Statistics in Medicine (2007)
  • C. Creegan et al. Towards a Policy Evaluation Service: Developing Infrastructure to Support the Use of Experimental and Quasi-experimental Methods (2007)
  • S. Eldridge et al. Lessons for cluster randomized trials in the twenty-first century: a systematic review of trials in primary care. Clinical Trials (2004)
  • S. Eldridge et al. Why modelling a complex intervention is an important precursor to trial design: lessons from studying an intervention to reduce falls-related injuries in elderly people. Journal of Health Services Research and Policy (2005)

This article was originally published in the BMJ 337, pp. 979–983, and is republished with permission from the BMJ as part of a series of classic methods papers. An introductory commentary is available at http://dx.doi.org/10.1016/j.ijnurstu.2012.09.009.
