Analysis

Quality and Outcomes Framework: what have we learnt?

BMJ 2016; 354 doi: https://doi.org/10.1136/bmj.i4060 (Published 04 August 2016) Cite this as: BMJ 2016;354:i4060

Martin Roland, professor of health services research¹
Bruce Guthrie, professor of primary care medicine²

¹Institute of Public Health, Cambridge CB2 0SR, UK
²Population Health Sciences, University of Dundee, Dundee, UK

Correspondence to: M Roland mr108@cam.ac.uk

Martin Roland and Bruce Guthrie assess the successes and failures of the pay-for-performance scheme and what its future should be

In 2004 the UK National Health Service introduced the largest health-related pay-for-performance scheme in the world—the Quality and Outcomes Framework (QOF).1 However, Scotland is now abandoning the scheme, and growing disenchantment in England is likely to lead to major changes. What have we learnt, and what should happen to QOF in future?

Promising start

In the late 1990s, general practitioners’ pay had fallen substantially behind that of specialists, and morale and recruitment in general practice were poor. The government and the British Medical Association (BMA) privately agreed that a large pay rise was needed. Money was available because in 2000 the government had committed to increasing NHS spending to mid-European levels as a percentage of gross domestic product. However, the profession had to give something in return, and the BMA dropped its longstanding opposition to “quality payments” and started to negotiate a pay-for-performance scheme that would substantially increase funding for general practice.

There followed 18 months of negotiations between the BMA and NHS Employers, with a small number of clinical advisers, to develop the outcome measures (indicators) that would form the basis of the scheme (box 1). A starting premise was that the clinical indicators should be based on evidence based guidelines so that they would be likely to command a wide degree of professional support (box 2). The framework also included indicators related to practice organisation and patient participation. The package was controversial, and the BMA allowed its members to vote on the scheme—once in outline and once when the details were known.

Box 1: How the Quality and Outcomes Framework works

  • The original scheme included 76 clinical indicators covering 10 conditions

  • Data on clinical quality were extracted automatically from practice electronic records

  • Doctors could exclude patients from individual clinical indicators (exception reporting) for specified reasons including clinical inappropriateness, intolerance of medication, and patient dissent (see the sketch after this box)

  • Organisational indicators included medical records, information for patients, education and training, practice management, and medicines management

  • Patient experience indicators related to conducting and acting on the results of patient experience surveys and offering booked appointments of at least 10 minutes
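
To make the mechanics in box 1 concrete, the sketch below (in Python) shows the general shape of a points based calculation of this kind: exception reported patients are removed from the denominator before achievement is measured, and points are then awarded pro rata between a lower and an upper achievement threshold. The function, thresholds, and point values here are illustrative assumptions; the real contract parameters varied by indicator and year.

    def indicator_points(met, eligible, exceptions, max_points,
                         lower=0.25, upper=0.90):
        """Illustrative QOF-style points calculation.

        Exception reported patients are removed from the denominator
        before achievement is measured. Thresholds and point values
        are assumptions, not the contract's actual figures.
        """
        denominator = eligible - exceptions
        if denominator <= 0:
            return 0.0
        achievement = met / denominator
        if achievement <= lower:    # below lower threshold: no points
            return 0.0
        if achievement >= upper:    # at or above upper threshold: full points
            return float(max_points)
        # Pro rata award between the lower and upper thresholds
        return max_points * (achievement - lower) / (upper - lower)

    # Example: 180 of 220 eligible patients meet the indicator and 20 are
    # exception reported, so achievement is 180/200 = 90% and the practice
    # earns all 17 points available for this hypothetical indicator.
    print(indicator_points(met=180, eligible=220, exceptions=20, max_points=17))

Total practice payment then came from summing points across indicators and multiplying by a monetary value per point, adjusted for practice characteristics; that adjustment itself caused problems, discussed below.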

Box 2: Examples of indicators that attracted broad professional support

  • Percentage of patients aged ≥45 who have a record of blood pressure in the preceding five years

  • Percentage of patients with coronary heart disease in whom the last blood pressure reading (measured in the preceding 12 months) is ≤150/90 mm Hg

  • Percentage of diabetic patients with up-to-date influenza immunisation

The government decided that it would be discriminatory to put age limits on the indicators, even though most of the available evidence came from trials that excluded older people. As time went on, and with targets largely met, single disease indicators appeared less relevant to the needs of patients, particularly older people with multiple complex problems.

Implementation took place over more than a year. The indicators were available well before the financial rewards were introduced, to facilitate planning, investment in electronic clinical records with tools for managing chronic disease, production of detailed guidance for practices, and support for implementation in some parts of the country. Electronic clinical records, already well advanced in primary care, became universal because they were needed to obtain payment, though GPs had to employ more administrative staff to collect the required data. QOF accelerated existing trends towards shifting care for chronic physical conditions, particularly diabetes and cardiovascular and respiratory disease, to nurse-led clinics. Practices used the software tools created for QOF payments to monitor their care of patients, with more internal management to ensure targets were met.2

High payments

With an overall cost of over £1bn (€1.2bn; $1.3bn), QOF proved nearly £300m more expensive in its first year than the government had expected because it had underestimated the baseline quality of care. This meant that many practices achieved near maximum performance (and therefore payment) in the first year. Practice income rose rapidly, with QOF potentially providing an additional 25% of income, and this certainly reduced professional opposition to the scheme. However, the initial rises in income were progressively clawed back over the next 10 years through zero or near zero pay rises, such that real-terms income in 2013-14 fell back below its 2003-04 level.3

All QOF data were publicly available, which meant that three major innovations were introduced simultaneously: much better data collection, public release of information on quality of care, and pay for performance. It is therefore unclear what effect the first two would have had on their own, or the degree to which pay for performance itself drove quality.

Did QOF improve quality of care?

QOF did produce some improvements in quality of care, but these came against a background of a widespread programme of quality improvement in the NHS that included national standards for the major chronic diseases, annual appraisal of all doctors working in the NHS, and widespread use of clinical audits to compare practices, sometimes with public release of data. For asthma and diabetes, for example, the introduction of QOF was followed by a modest increase in the rate at which care was already improving.4 For the major chronic diseases in QOF, there were also reductions in inequalities in delivery of care, with practices in socioeconomically deprived areas rapidly catching up with the performance of practices in more affluent areas.5 The scheme may have limited the rise in emergency admissions for included conditions but did not appear to reduce associated mortality.6 7 8

Quality of care for conditions not included in QOF also continued to improve, but at a slower rate than before the scheme's introduction.9 There were almost certainly negative consequences: for example, the progressive decline in patients' ability to see a GP of their choice was probably partly due to a relentless government focus on incentives for rapid access to care.

Did doctors cheat?

Gaming and manipulation of data are hard to detect, and the planned “light touch” inspections were in practice lax. The government’s concern had been about one aspect of the scheme—the ability of doctors to exclude individual patients from the data (exception reporting) for a range of reasons, including their clinical judgment. This had been important to get professional support for the scheme, but the government saw it as an open invitation to game the system. This proved not to be the case, with only around 5% of patients reported as exceptions, though, as would be expected, rates of exception reporting were lower for simple processes such as measuring blood pressure and higher for more complex processes such as diagnosis and intermediate outcomes.10 11

What went wrong with QOF?

QOF remains one of the largest implementations of healthcare pay-for-performance in the world, and any programme on this scale will experience difficulties. Over the years there have been several technical problems, one of which was that the original payment formula unintentionally gave larger and more affluent practices systematically higher payments than smaller practices for the same level of quality.12 A second major technical flaw was that payments based on responses to a national patient survey were subject to random variation, such that practices could improve care from one year to the next but actually receive less money.13
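
The first of these flaws stemmed from how payments were weighted for practice list size and disease prevalence. A simplified sketch, assuming the widely reported square root form of the original prevalence adjustment (the exact contract arithmetic differed), illustrates the effect: because practices in more affluent areas typically have lower disease prevalence, a square root weighting pays them more per patient actually treated. All figures below are illustrative.

    import math

    def payment_per_register_patient(list_size, prevalence, points,
                                     pound_per_point=125.0,
                                     average_list_size=5891.0):
        """Sketch of the original prevalence weighting (assumed form:
        payment scaled by relative list size and by the square root of
        disease prevalence). All parameter values are illustrative."""
        register_size = list_size * prevalence
        payment = (points * pound_per_point
                   * (list_size / average_list_size)
                   * math.sqrt(prevalence))    # the contested adjustment
        return payment / register_size

    # Two practices with the same list size, points, and quality of care:
    low = payment_per_register_patient(6000, prevalence=0.02, points=50)
    high = payment_per_register_patient(6000, prevalence=0.08, points=50)
    print(f"£{low:.2f} vs £{high:.2f} per patient on the register")
    # The high prevalence practice treats four times as many patients but,
    # because of the square root, earns only half as much per patient.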

Problems also occurred with some indicators after implementation. For example, the codes used to define diabetes registers were changed to include only records that stated the type of diabetes. Although the change was intended to improve the quality of registers, some people with less specific diabetes codes effectively vanished from practices' QOF registers and may subsequently have received worse care.14 Many of these problems could have been avoided by better testing of indicators before implementation, which eventually happened when the National Institute for Health and Care Excellence (NICE) took over indicator development in 2009.

Although the initial indicators largely related to aspects of care that GPs already thought important, the alignment of indicators with professional values weakened over time. This was partly because the easy targets had already been met, and new indicators, though evidence based, offered only marginal gains for considerable workload. In addition, an increasing proportion of QOF was taken up with indicators that served a managerial or policy agenda rather than a clinical one (table). There was also concern that the needs of the growing population of older people with multiple complex problems were poorly served by indicators focused exclusively on single diseases.

Table: Examples of indicators that went wrong

The maximum percentage of practice income linked to quality indicators was reduced from 25% to 15% in 2013 because of perceptions that the higher rate distorted clinical practice. More radically, GPs in one English district (Somerset) negotiated a complete alternative to QOF in 2015, and Scotland dropped QOF in 2016 in favour of a quality improvement scheme based on local "quality circles."

The unpopularity of QOF among professionals has undoubtedly been increased by the administrative burden it produces at a time when GP workloads have been increasing, general practice has been receiving a declining share of the NHS budget, and work stress is higher than at any time in the last 15 years.15

So what should happen now?

There is general agreement that maintaining and improving quality of care is a professional responsibility but less agreement on how this should be done. Although there is consensus that QOF requires substantial change, evidence is conflicting on whether quality declines when pay-for-performance incentives are removed.16 17 It would therefore be prudent to require some limited ongoing data collection to guard against serious adverse consequences of withdrawing financial incentives. There is also little evidence on what would work better, although the steady improvement in quality of care in the decade preceding QOF suggests that developing and implementing guidelines and standards, reinforced by local clinical audit, is effective. The NHS in Scotland has chosen this approach, replacing QOF with "quality circles" implemented through clusters of 10-15 practices working collaboratively to identify and develop relevant improvement work. There is funding to release GPs from practices but no centrally created targets or financial incentives. Similar work is ongoing in Wales.

The successes of QOF included an acceleration of previous trends towards systematic management of chronic disease by multidisciplinary teams and widespread introduction of electronic medical records. However, quality and safety improvement require multiple strategies, sustained over time. Winning hearts and minds through persuasion, collaboration, and close alignment of professional and managerial agendas is at least as important as the more technical elements of any individual quality improvement initiative. QOF (and pay for performance more generally) was not a magic bullet to improve quality and reduce variation, but neither will its replacements be. It remains to be seen which of the divergent approaches being taken by the NHS in England, Scotland, Wales, and Northern Ireland is most successful.

Key messages

  • The Quality and Outcomes Framework accelerated previous trends towards widespread use of electronic medical records and multidisciplinary management of chronic diseases

  • QOF resulted in relatively limited additional improvements in quality but reduced socioeconomic inequalities in delivery of care

  • Several indicators were withdrawn because they lacked professional support or there were problems with implementation

  • New strategies are needed to continue improvements in quality of care

Footnotes

  • Contributors and sources: Both authors have provided advice on the development of QOF and both contributed to the development and authorship of this article.

  • We have read and understood BMJ policy on declaration of interests and declare the following interests: MR advised the BMA and NHS Employers on the development of the Quality and Outcomes Framework from 2001 to 2003. BG was a member of the NICE Quality and Outcomes Framework indicators advisory committee and chaired the methods, retirement thresholds, and review subcommittee from 2009 to 2014.

  • Provenance and peer review: Not commissioned; externally peer reviewed.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
