
Health outcomes measurement and organizational readiness support quality improvement: a systematic review

Abstract

Background

Using outcome measures to advance healthcare continues to be of widespread interest. The first objective of this review is to summarize the results of studies that use outcome measures from clinical registries to implement and monitor quality improvement (QI) initiatives. The second objective is to identify a) facilitators and/or barriers that contribute to the realization of QI efforts, and b) how outcomes are being used as a catalyst to change outcomes over time.

Methods

We searched the PubMed, EMBASE and Cochrane databases for relevant articles published between January 1995 and March 2017. We used a standardized data abstraction form. Studies were included when the following three criteria were fulfilled: 1) they relied on structured data collection, 2) a structured and comprehensive QI intervention had been implemented and evaluated, and 3) the impact on clinical and/or patient-reported outcomes was described. Data on QI strategies, QI initiatives and the impact on outcomes were extracted using standardized assessment tools.

Results

We included 21 articles, of which eight showed statistically significant improvements in outcomes using data from clinical registries. In these eight studies, the QI methods used were the Chronic Care Model, IT applications for feedback, benchmarking and the Collaborative Care Model. Encouraging trends in realizing improved outcomes through QI initiatives were observed, ranging from improved teamwork and implementation of clinical guidelines to physician alerts and the development of a decision support system. Facilitators for implementing QI initiatives included a high-quality database, audits, frequent reporting and feedback, patient involvement, communication, standardization, engagement, and leadership.

Conclusion

This review suggests that outcomes collected in clinical registries support the realization of QI initiatives. Organizational readiness and an active approach are key to achieving improved outcomes.


Background

The use of clinical registries is considered crucial for systematically measuring clinical outcomes and achieving better value for patients [1]. A clinical or patient registry is defined as “an organized system that uses observational study methods to collect uniform data (clinical data as structure, process and outcome measures) to evaluate specified outcomes for a population defined by a particular disease, condition, or exposure” [2]. This review focuses on registries that are used for evaluating patient outcomes. Clinical registries have been widely recognized as a tool to realize quality improvement (QI) and public accountability [1, 3,4,5,6,7,8]. Medical associations use clinical registries to collect data on pre-defined measures for patients undergoing a certain procedure or with a specific disease [9]. In particular, feedback based on clinical registry data is used to identify and monitor improvement initiatives [10]. Registries are therefore seen as a promising tool for achieving improvements in value for the patient by measuring outcomes [1]. A previous review on the structure, use and limitations of current clinical registries showed that registries and their respective measures are used for monitoring the work of health care providers, as discussion platforms for QI, for improving risk-adjustment modelling and for improving preoperative risk profiling [11]. However, the current body of literature lacks insight into the extent to which the use of outcome measures from clinical registries, whether for identifying, selecting or monitoring QI initiatives, can impact health outcomes.

With rising healthcare costs, service restrictions, and differences in quality and costs, there is an increasing need for reform to improve the value of healthcare [12]. Value in healthcare is defined as outcomes relative to costs [13]. Value-based health care aims at achieving higher value for patients while ensuring the sustainability of the healthcare system through efficient and effective delivery of care [14]. This goal is assumed to be achieved by measuring and using outcomes per medical condition to identify improvement potential across the full cycle of care [12]. Achieving higher value for patients by measuring outcomes is one potential method for improving the quality of healthcare relative to the costs spent. For the purposes of this review, we focused only on outcome measures and not on the respective costs.

Quality of healthcare is generally assessed using structure, process or outcome measures [15]. The latter provide insight into the outcomes of one or several diseases, for instance survival, functional status, and quality of life [16]. The aims of measuring outcomes are diverse: guiding clinical decision-making, initiating improvement interventions, benchmarking, monitoring, scientific research and public accountability. Measuring outcomes in a structured way and using them to identify possible improvements contributes to the aim of achieving higher value for patients [17].

The first objective of this study is to summarize the results of studies that use outcome measures from clinical registries to implement and monitor QI initiatives. For the purpose of this study, QI was defined as the application of a defined improvement process to achieve measurable improvement by implementing an improvement intervention. Registry data alone are not sufficient; QI methods are needed to achieve actual improvement. The second objective is to identify a) facilitators and/or barriers that contribute to the realization of QI efforts, and b) how outcomes are being used as a catalyst to change outcomes over time.

Methods

A systematic review was conducted of studies published between January 1995 and March 2017. The search strategy was designed for the PubMed, EMBASE and Cochrane databases. To identify evidence for the use of clinical registries to improve or contribute to patient health outcomes, the following PubMed MeSH terms were used: mortality, patient outcome assessment and treatment outcome. These terms were combined with a variety of search terms related to QI and to disease-specific registry studies. No specific patient group or study design was defined. Details of the complete search strategy are provided in the online supplementary content (Additional file 1: Appendix 1). Additional hand-searching for systematic reviews on the subject was conducted in Google Scholar during the review process.

Inclusion and exclusion criteria

Studies were included when they met each of the following criteria: 1) published in peer-reviewed journals, 2) published in English, French or German, 3) the study actively implemented a strategy using outcome data to realize QI, 4) the study relied on structured data collection, and 5) the study evaluated the QI interventions realized. Whether a study made use of a QI effort (criteria 3 and 5) was evaluated after reviewing the full-text papers and was therefore not part of the search string. After title screening, included studies were evaluated on criteria 3 and 5. Studies were excluded when they analyzed the effect of new intervention(s) on outcomes (testing drugs, new techniques or the effect of an intervention) or when the data had been collected solely to evaluate an intervention in a clinical trial.

Data extraction and quality assessment

For the initial selection, each reviewer screened a random set of records, first on title, then on abstract, and finally on full text, to determine eligibility. The full-text articles were critically reviewed and judged by all reviewers. Any disagreement between reviewers was discussed by the full review team until consensus was reached. The selected articles were evaluated using a standardized, predesigned form listing whether the inclusion criteria were met.

A thorough review process was carried out for the data quality assessment, which consisted of the following three steps:

Step 1: Data abstraction

The Cochrane data abstraction form for intervention reviews (RCTs and non-RCTs) was used as a tool to extract data on study design and methodological quality (Additional file 1: Appendix 2) [18]. Furthermore, data on the target group, main results, main outcome measures, data source, geographical setting and funding sources were abstracted.

Step 2: Rigor of QI intervention

The included studies were evaluated using the Quality Improvement Minimum Quality Criteria Set (QI-MQCS) as a critical appraisal instrument, developed by the RAND Corporation (Additional file 1: Appendix 3) [19]. The QI-MQCS contains 16 domains for evaluating a QI intervention, each scored as met or not met. The QI-MQCS does not define a threshold for the acceptability of the quality of papers. We therefore agreed on the following criteria to interpret the QI-MQCS score: a study was considered to be of perfect quality (>15 items ranked yes), good quality (>12 items ranked yes), moderate quality (>9 items ranked yes) or insufficient quality (≤9 items ranked yes).
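
To make this interpretation rule concrete, the sketch below maps a count of QI-MQCS domains ranked “yes” to the four quality categories defined above. It is a minimal illustration of the thresholds as stated; the function name and the choice of Python are ours and not part of the published protocol.

```python
def classify_qi_mqcs(yes_count: int, total_domains: int = 16) -> str:
    """Map the number of QI-MQCS domains ranked 'yes' to the quality
    categories agreed on in this review (illustrative helper only)."""
    if not 0 <= yes_count <= total_domains:
        raise ValueError("yes_count must lie between 0 and the number of domains")
    if yes_count > 15:
        return "perfect quality"
    if yes_count > 12:
        return "good quality"
    if yes_count > 9:
        return "moderate quality"
    return "insufficient quality"

# Example: a study with 13 of the 16 domains ranked 'yes' is of good quality,
# while a study with 12 domains ranked 'yes' is of moderate quality.
print(classify_qi_mqcs(13))  # good quality
print(classify_qi_mqcs(12))  # moderate quality
```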

Step 3: Rigor of data collection and analysis

In addition to the QI-MQCS, 13 items were added for further evaluation. Two questions (items 2 and 18) from the Downs & Black (1998) criteria were used to assess whether the main outcomes had been clearly described in the introduction or methods section and whether the statistical tests used to assess the main outcomes were appropriate [20]. In addition, three questions (items 10c, 11a and 11b) from the SQUIRE guidelines were used: 1) whether a method was employed for assessing the completeness and accuracy of the data, 2) whether quantitative methods were used to draw inferences from the data and 3) whether methods were applied for understanding variation within the data, including the effects of time as a variable [21]. Furthermore, we evaluated how the included studies dealt with missing values; whether they performed audits, reported on secular trends and performed case-mix adjustment; whether clear inclusion and exclusion criteria had been defined for the patient population; and, where possible, whether a power analysis was conducted.

In summary, the Cochrane data abstraction form was used to abstract data from the selected articles in order to identify changes in outcomes and facilitators. Data synthesis was guided by 1) the QI-MQCS results and 2) the merged and modified set of Downs & Black (1998) criteria, SQUIRE guideline items and self-developed questions. Due to the diversity of outcomes, the results were not pooled.

Results

Search results and included studies

The final systematic search resulted in 11,524 records for initial screening; 117 articles were retained for full-text review, of which 96 were excluded because they did not meet the inclusion criteria (Fig. 1) [22]. One additional article was included from a relevant systematic review that emerged from hand-searching [23, 24]. Table 1 presents the characteristics of the 21 included studies. The studies focused on registries for the following patient groups: patients with diabetes [24,25,26,27,28,29,30,31], children with chronic conditions [32], patients with lung cancer [33, 34], patients with cystic fibrosis [35,36,37], patients with cardiac anomalies [38], patients undergoing cardiac surgery [39,40,41], patients with acute myocardial infarction [42], and patients referred for home health services [43]. The majority of the registries were based on voluntary participation [25,26,27, 29,30,31, 35, 36, 38, 40,41,42,43]. Three registries required mandatory participation [28, 33, 34]. Most of the registries were set up with the purpose of achieving QI [24, 25, 28,29,30,31,32,33,34, 37, 39, 41,42,43]. The remaining studies introduced their clinical registry for research and educational purposes [26, 27, 35, 36, 38, 40, 44].

Fig. 1
Flow diagram. Source: Authors’ analysis; format adapted from PRISMA [22]. a Exclusion criteria: 1. Not published in a peer-reviewed journal; 2. Not published in English, French or German; 3. Did not actively implement a strategy making use of outcome data to realize quality improvement; 4. Did not rely on structured data collection; 5. Did not evaluate quality improvement interventions using data from outcome registries

Table 1 Characteristics of Included Studies (n = 21)

Impact of quality improvement

Eight studies showed statistically significant improvement in outcomes resulting from the implementation of QI initiatives [25, 27, 29, 31, 33, 34, 42, 44]. Statistically significant improvements were achieved in long-term survival [33, 34], mortality [42], readmission rate [42], bleeding complications [42], systolic blood pressure [27], HbA1C [27, 29], LDL [27, 29], exercise habits [25], depression in the acute phase (PHQ-9 score) [44], and hospitalizations for ambulatory care-sensitive conditions [31]. The remaining studies did not show statistically significant improvements. All included studies presented outcome measures for their respective improvement work, five of which also measured additional process measures [27,28,29,30,31,32,33, 35, 41, 42, 44]. Table 2 presents the outcome measures used, the QI methods applied and whether statistically significant improvement of outcome measures was achieved. A detailed overview of the significance of outcome measures can be found in the online supplementary content (Additional file 1: Appendix 4). None of the studies identified an impact on patient value or evaluated the impact on costs of care.

Table 2 Improvement in outcomes and/or processes

Quality of the studies

Rigor of quality improvement interventions

The overall quality of the included articles was moderate (see Tables 3 and 4). On the 16 domains of the QI-MQCS, four articles achieved a score of 13, the highest score among the included studies [24, 26, 32, 37]; these articles are therefore considered to be of good quality. Four articles were ranked as moderate quality with a score of 12 [35, 39, 42, 44]. Five articles scored poorly on the QI-MQCS, with a score ≤7, and were ranked as insufficient quality [31, 33, 34, 38, 41].

Table 3 Scoring of the RAND QI-MQCS
Table 4 Scoring of the RAND QI-MQCS

Rigor of data collection and analysis

The overall results of the quality assessment of data collection and analysis are displayed in the online supplementary content (Additional file 1: Appendix 5). Four studies applied generalized linear mixed models for the analysis of change in outcomes [25, 27, 36, 42]. One study used a generalized estimating equation model with repeated measurements [24]. Inferential statistics were also used in the form of survival analyses, logistic regression and chi-square analyses [29, 31, 33, 39, 44]. The remaining studies made use of descriptive statistical analyses only [26, 30, 32, 38, 43]. To monitor change, run charts were applied in five studies [28, 35, 37, 40, 41].
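
As an illustration of the run charts used in those five studies to monitor change, the sketch below plots an outcome measure over successive reporting periods against its median, the usual centre line of a run chart. The data, libraries and variable names are illustrative assumptions on our part, not taken from the included studies.

```python
import statistics
import matplotlib.pyplot as plt

# Hypothetical monthly values of a registry outcome measure,
# e.g. mean HbA1C (%) per reporting period; purely illustrative data.
periods = list(range(1, 13))
values = [8.4, 8.3, 8.5, 8.1, 8.0, 8.2, 7.9, 7.8, 7.9, 7.6, 7.7, 7.5]

# Run charts conventionally use the median of the observations as the centre line.
centre_line = statistics.median(values)

plt.plot(periods, values, marker="o", label="observed outcome")
plt.axhline(centre_line, linestyle="--", label=f"median = {centre_line:.2f}")
plt.xlabel("Reporting period")
plt.ylabel("Outcome measure")
plt.title("Run chart for monitoring change over time")
plt.legend()
plt.show()
```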

Regarding the additional item criteria, two studies applied methods to account for missing values in their data and also conducted a power analysis [25, 27].

Methods used to achieve improvements

We identified six methods used to achieve QI: benchmarking [33, 34, 38,39,40,41], a collaborative care model [26, 28, 42, 44], Plan-Do-Check-Act [36, 37], the Chronic Care Model [25, 32, 37], the Learning and Leadership Collaborative [35] and IT-driven interventions [24, 27, 29, 30, 41]. In some studies no clear QI method was used [29, 31, 43]. We discuss these methods in the following paragraphs.

Benchmarking

Benchmarking was applied in several of the included studies [33, 34, 38, 39, 41]. Data were mostly compared among different hospitals [33, 34, 38]. Annual publication of data in the form of reports was the most common way of reporting results [33, 34, 41].

One study complemented its national report with an additional disease-specific report containing supplementary measures [33]. Another form of benchmarking was discussion of the results at a (monthly or annual) meeting; during the annual meeting, results from the reports were discussed and further evaluated [38]. Short-term feedback cycles with monthly publication of reports were also applied [39]. Initiatives that applied benchmarking to improve outcomes were characterized by a strong data-driven system combined with audits, as well as a model to change practice [33, 34, 39, 40].

Collaborative care model

Three studies applied the Breakthrough Collaborative Model (BCM) to structure their efforts to improve outcomes [26, 28, 42]. One study applied a web-based disease registry to track patients with symptoms of depression in order to support treatment management in primary care [44]. In addition, evidence-based depression management training was provided to primary care providers. Across all sites, most patients experienced meaningful improvement in depression.

The BCM was used to design a cycle of structured discussion sessions during which outcomes were analyzed and presented and variation in work processes was discussed [26, 28]. The model was furthermore used as a guide to facilitate improvement efforts and insight into the data [26, 42].

Plan-do-check-act

In two studies, Plan-Do-Check-Act (PDCA) cycles were used to improve outcomes and/or processes [26, 36, 37]. However, the cycle was presented as a tool supporting other methods, either the application of the BCM [26] or benchmarking [36]. In the latter study, PDCA was applied to prepare for national benchmarking: three cycles, organized as multidisciplinary meetings in which outcomes were discussed and improvement initiatives were identified, were completed before the data were shared publicly [36].

The other study, which primarily used the methods outlined for the BCM, used PDCA to structure and evaluate the learning sessions [26]; it was not the primary method for improving outcomes. In another study, PDCA was used to continually evaluate local cystic fibrosis care practices, which improved pulmonary function and nutritional outcomes [37].

The chronic care model

Three studies applied the Chronic Care Model (CCM) [25, 32, 37]. One study that applied the CCM used supporting techniques such as audit and feedback, an electronic registry, clinician reminders, patient reminders, and abbreviated patient education; the CCM thus served as a framework offering practical tools rather than a method in itself [25]. This study did not find the expected improvements in outcomes, and the authors suggested that another, more collaborative approach would be needed to improve outcomes of chronic diseases [25]. The second study applied the CCM in children with various chronic conditions, in combination with PDCA cycles, failure mode and effects analysis and Pareto charts of failures [32]; this study resulted in improvement of the respective outcomes [32]. The third study applied the CCM to ensure that all aspects of cystic fibrosis management were covered and combined this with PDCA to continually evaluate the processes of best practices in cystic fibrosis care; the effectiveness of applying the CCM was not evaluated.

Learning and leadership collaborative

The Learning and Leadership Collaborative (LLC) was applied in one study [35]. Commitment of a team to participate in a QI program, a sense of shared organizational responsibility for improvement, measurement of outcomes and processes, and patient involvement were defined as key ingredients for QI. The LLC was used to train staff in structured discussions of outcomes and/or processes and to introduce a patient registry [35]. Data were registered and analyzed at one particular hospital but presented to all participating hospitals. Participation in the LLC led to the initiation of an improvement initiative at the hospital where the data were registered and analyzed.

IT application as feedback tool

Five studies made use of (self-developed) IT applications to empower patients and/or physicians to manage patients with greater care. These studies aimed at linking administrative and key clinical data and made use of reminder functions [24, 27, 30]. One study concluded that its patients received better overall coordination of care [30]. Two other studies reported significant improvements in the percentage of patients with type 2 diabetes and at-risk populations using diabetes registries who achieved recommended values for systolic blood pressure (SBP), LDL and HbA1C [27]. In one study, data were additionally displayed in operating theatres, surgical office suites and nursing units [41]. Another study reported improved adherence to diabetes care processes in a continuity clinic as a result of registry-generated audit, feedback and patient reminders [24].

Facilitators for quality improvements

A notable facilitator of QI was frequent reporting and feedback, either annually or even monthly [28, 33, 34, 38,39,40,41]. The use of a database with high-quality data, audits and reports, as well as strong stakeholder involvement, were also found to be important factors contributing to successful QI [33, 34]. Structured registry data and an improvement intervention that could be linked to outcomes led to improvement in the respective outcome measures [42]. Other factors mentioned as necessary for successful QI in one or more of the included studies were (1) patient involvement, communication and standardization; (2) attitude and enthusiastic commitment from physicians, clinical managers and central administration; and (3) appreciation of the importance of measurement [28, 35, 40, 41]. Moreover, improvement in outcomes appeared to be successful if supported by a proven QI approach [42]. Inconsistencies were found regarding the importance of involving an expert in the field of QI. On the one hand, involvement of a QI expert was considered positive for the start of an improvement agenda, as it contributed to a more rapid implementation of improvement initiatives [42]. On the other hand, one study did not experience the absence of an additional expert or formal team as a barrier to successful outcome improvement [26]; this was only possible because a structured data registry was already present [26].

Catalyst to improve outcomes over time

Outcomes can be improved over time through the systematic use of outcome registries and the facilitators described above. Outcome data and their interpretation helped achieve improvements in outcomes faster than in studies that did not use outcome data [34]. Outcomes were not only used to identify possible improvement interventions but also to monitor and secure improvements in the long run [34].

A computerized system was presented as a success factor in translating data from clinical registries into changed outcomes and/or processes [26,27,28,29, 31,32,33,34,35,36, 42, 45]. Such a computerized system ensured valid and timely results [33]. Moreover, it allowed for real-time feedback, which in turn led to faster identification of areas for improvement [28, 29, 31, 42].

Further use of outcome data for outcome improvement included the development of checklists, improved use of diagnostic standards, creation of data transparency, guidelines, improved patient recall, patient empowerment and leadership towards improvement [28, 29, 31, 36].

Discussion

Eight of the 21 included studies reported statistically significant improvements in outcomes resulting from the implementation of QI initiatives, including long-term survival, mortality, readmission rate, bleeding complications, systolic blood pressure, HbA1C, LDL, exercise habits (FEV1), depression in the acute phase (PHQ-9 score) and hospitalizations for ambulatory care-sensitive conditions. In these eight studies, the QI methods used were the Chronic Care Model, IT applications for feedback, benchmarking and the Collaborative Care Model. A diverse set of clinical outcomes was collected, and no patient-reported outcome measures (PROMs) were applied in any of the studies. Yet only one study that reported statistically significant improvements in outcomes was of good quality. The improvement interventions were diverse, ranging from the implementation of guidelines, the development of physician/patient alerts and improved teamwork to patient engagement through IT applications and the development of a decision support system. Many improvement interventions were combined in order to build a multifaceted approach to QI [24, 27, 28, 32, 37, 42, 44]. Facilitators for realizing QI included a high-quality database, the use of pre-defined outcome measures, audits, frequent reporting and feedback, patient involvement, improved communication and standardization. Systematic approaches were used to structure the improvement cycle. With regard to using data from clinical registries as a catalyst to change outcomes, this review suggests that a strong computerized system supports frontline clinical process management and improvement work.

A facilitator identified in this review was the organization of discussions for mapping and selecting best practices. It was further shown that sound data management has a catalyzing effect. These data can be aggregated in annual reports and can also be used for comparisons with peers and/or nationwide comparisons. A registry can also facilitate access to real-time outcome and process data, which can engage the team in realizing active improvements. Other registry programs, such as the Get With The Guidelines-Stroke study, a large registry and performance improvement program for hospitalized patients with stroke and transient ischemic attack, also use annual reports for benchmarking and feedback purposes [46].

Other systematic reviews concluded that audit and feedback can lead to small but important improvements in professional practice and healthcare outcomes [47]. They furthermore concluded that the effectiveness of audit and feedback depends on how the feedback is provided as well as on baseline performance. Comparing that review with ours, there was one paper that both reviews included [24]; however, the objectives were very different, which may explain why there was not more overlap in the included studies.

In addition, barriers to and success factors for the effectiveness of feedback have been identified [48]. However, the authors were not able to draw sound conclusions on the effect of feedback on the quality of care and its potential to improve outcomes. Another review concerning renal registry data reflected on the potential of registry data to help advance nephrology care delivery [49].

None of these reviews studied the effect of QI efforts other than audit and feedback on the quality of care and outcomes. This is the first study in which the literature was searched in detail to identify barriers and facilitators supporting QI interventions based on information from clinical registries.

The use of clinical registries can be seen as an important tool for systematically measuring clinical outcomes and achieving the goals of value-based health care. This is not only in line with our conclusions but is also acknowledged by others [1, 50, 51]. Other data sources can also be valuable for QI efforts, such as data from randomized controlled trials. However, this review aimed at including studies in which structured data were collected through the use of a clinical registry.

To improve value, measuring both outcomes and costs is essential [17]. Working with international registries makes global comparisons possible, for example by identifying practice variation, and can thereby improve the quality of care for the whole patient group [52].

Implications

We did not observe many efforts to incorporate PROMs. It is, however, generally considered important to measure the impact on health-related quality of life (HRQoL) when evaluating the effect of QI initiatives [53]. The studies included in this review did not reflect on why they did not use PROMs or what the added value would have been if they had. Even so, one study did report the start of measuring quality of life in patients with cystic fibrosis [36]. The authors report that this will lead to more insight into the complexity of QI efforts and into personal patient gains in experienced quality of life. It will also enable reporting on the extent to which value was created from the patient’s perspective. Future QI efforts will very likely combine QI with benchmarking that incorporates quality-of-life outcomes.

None of the included studies reported costs, so our study could not evaluate the true impact on value. Incorporating costs would make it possible to identify cost drivers and to compare improvement interventions, as proposed by the value-based healthcare principles [50]. A recent study showed that surgery for the oldest patients with colorectal cancer did not lead to increased hospital costs [51]. However, this study did identify variation in the distribution of cost drivers: patients under 85 years old had lower ward, operating room and intensive care unit costs. Identifying costs and their main drivers will therefore make it possible to develop improvement programs for specific sub-groups, which might be a powerful tool to reduce, for example, complications and thus hospital costs. Value-based health care could be the overarching concept guiding improvement initiatives, combined with well-defined methods. However, the field lacks a clear guide with implementation examples. Studies reflecting on impact, outcomes and costs are needed. Finally, the standardization of outcome measures is key, although they should be defined for a specific patient population. Transparent measurement of outcomes and costs has the potential to improve the value of care for all patients. Providers, patients and payers can all benefit from this collective common goal of transparency.

Limitations

This review has some inherent limitations. Firstly, due to the very heterogeneous types of QI programs and their respective patient groups, it is difficult to generalize the results achieved in the included studies. Moreover, our inclusion criteria for QI programs may be to some extent arbitrary, which could possibly have biased the inclusion or exclusion of studies.

The context in which a clinical registry is organized can also impact outcomes. Important differences were observed in, for example, whether the registry was linked to reimbursement or public reporting or was primarily initiated for scientific or QI purposes, and whether participation was voluntary or mandatory.

Secondly, the studies included in this review mainly focused on experiences with non-communicable diseases and thus often on chronic patient groups. Our aim was not to exclude communicable diseases, but we did not identify any such studies in our literature search. This could indicate that chronic patient groups have benefitted most from the realization of registries and the respective QI interventions. As a result, improvement projects concerning other (non-chronic) patient groups have not been included in this review. Thirdly, due to publication bias, studies reporting no effect are less likely to have been published and may therefore have been missed. Finally, two studies randomized practices [25, 27]. One study randomly allocated 19 volunteering hospitals to one of two intervention groups, where the interventions differed in both design and intensity [42]. In the other studies, complete randomization was not possible, since the hospitals involved were, for example, volunteers; these hospitals might therefore differ in their willingness to improve, introducing potential selection bias.

Conclusion

The results of this evaluation of studies that use outcome measures from clinical registry data to implement and monitor QI initiatives may help policy makers, managers and clinicians to understand the effectiveness, practicality and challenges of implementing QI interventions. An active and systematic approach is needed to improve outcomes, and continuous feedback from the data linked to clinical practice is crucial. Our review indicates that successful QI, and consequently improved outcomes, depends on an active approach and organizational readiness.

There are many QI methods, and the majority of improvement interventions contain a combination of several methods. Clinical registries can be seen as supportive instruments in the process of improving quality of care. However, a clinical registry can only be successful in realizing QI efforts when there is commitment and leadership at both the physician and manager level, as well as a benchmarking facility, a well-integrated computerized system, and a collective aim to identify best practices.

Abbreviations

BCM: Breakthrough Collaborative Model

CCM: Chronic Care Model

FEV1: Forced expiratory volume in one second

HbA1C: Glycated hemoglobin

HRQoL: Health-related quality of life

LDL: Low-density lipoprotein

LLC: Learning and Leadership Collaborative

PDCA: Plan-Do-Check-Act

PROMs: Patient-reported outcome measures

QI: Quality improvement

QI-MQCS: Quality Improvement Minimum Quality Criteria Set

SQUIRE: Standards for Quality Improvement Reporting Excellence

References

  1. Larsson S, Lawyer P. Improving health care value - the case for disease registries; 2011.


  2. Gliklich RE, Dreyer NA, Leavy MB. Registries for evaluating patient outcomes: a user’s guide. Rockville (MD): Government Printing Office; 2014.

  3. Gitt AK, Bueno H, Danchin N, Fox K, Hochadel M, Kearney P, Maggioni AP, Opolski G, Seabra-Gomes R, Weidinger F. The role of cardiac registries in evidence-based medicine. Eur Heart J. 2010;31(5):525–9.


  4. Hickey GL, Grant SW, Cosgriff R, Dimarakis I, Pagano D, Kappetein AP, Bridgewater B. Clinical registries: governance, management, analysis and applications. Eur J Cardiothorac Surg. 2013;44(4):605–14.


  5. Black N. High-quality clinical databases: breaking down barriers. Lancet. 1999;353(9160):1205–6.


  6. Pryor DB, Califf RM, Harrell FE, Hlatky MA, Lee KL, Mark DB, Rosati RA. Clinical data bases: accomplishments and unrealized potential. Med Care. 1985;23(5):623–47.


  7. Aljurf M, Rizzo J, Mohty M, Hussain F, Madrigal A, Pasquini M, Passweg J, Chaudhri N, Ghavamzadeh A, Solh H. Challenges and opportunities for HSCT outcome registries: perspective from international HSCT registries experts. Bone Marrow Transplant. 2014;49(8):1016–21.

  8. Bhatt DL, Drozda JP, Shahian DM, Chan PS, Fonarow GC, Heidenreich PA, Jacobs JP, Masoudi FA, Peterson ED, Welke KF. ACC/AHA/STS statement on the future of registries and the performance measurement Enterprise: a report of the American College of Cardiology/American Heart Association task force on performance measures and the Society of Thoracic Surgeons. J Am Coll Cardiol. 2015;66(20):2230–45.


  9. Evans SM, Scott IA, Johnson NP, Cameron PA, McNeil JJ. Development of clinical-quality registries in Australia: the way forward. Med J Aust. 2011;194(7):360–3.


  10. McNeil JJ, Evans SM, Johnson NP, Cameron PA. Clinical-quality registries: their role in quality improvement. Med J Aust. 2010;192(5):244–5.


  11. Stey AM, Russell MM, Ko CY, Sacks GD, Dawes AJ, Gibbons MM. Clinical registries and quality measurement in surgery: a systematic review. Surgery. 2015;157(2):381–95.


  12. Porter ME. Value-based health care delivery. Ann Surg. 2008;248(4):503–9.


  13. Porter ME. What is value in health care? N Engl J Med. 2010;363(26):2477–81.


  14. Erekson EA, Iglesia CB. Improving patient outcomes in gynecology: the role of large data registries and big data analytics. J Minim Invasive Gynecol. 2015;22(7):1124–9.


  15. Donabedian A. Evaluating the quality of medical care. Milbank Q. 2005;83(4):691–729.


  16. Hedges LV, Vevea JL. Fixed-and random-effects models in meta-analysis. Psychol Methods. 1998;3(4):486.


  17. Porter ME, Pabo EA, Lee TH. Redesigning primary care: a strategic vision to improve value by organizing around patients' needs. Health Aff (Millwood). 2013;32(3):516–25.


  18. Cochrane Training. Data collection forms for intervention reviews. https://dplp.cochrane.org/data-extraction-forms. Accessed 2 Mar 2017.

  19. Hempel S, Shekelle PG, Liu JL, Sherwood Danz M, Foy R, Lim YW, Motala A, Rubenstein LV. Development of the quality improvement minimum quality criteria set (QI-MQCS): a tool for critical appraisal of quality improvement intervention publications. BMJ Qual Saf. 2015;24(12):796–804.


  20. Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health. 1998;52(6):377–84.


  21. Ogrinc G, Mooney SE, Estrada C, Foster T, Goldmann D, Hall LW, Huizinga MM, Liu SK, Mills P, Neily J, Nelson W, Pronovost PJ, Provost L, Rubenstein LV, Speroff T, Splaine M, Thomson R, Tomolo AM, Watts B. The SQUIRE (standards for QUality improvement reporting excellence) guidelines for quality improvement reporting: explanation and elaboration. Qual Saf Health Care. 2008;17(Suppl 1):i13–32.


  22. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9.


  23. Hoque DME, Kumari V, Hoque M, Ruseckaite R, Romero L, Evans SM. Impact of clinical registries on quality of patient care and clinical outcomes: a systematic review. PLoS One. 2017;12(9):e0183667.


  24. Thomas KG, Thomas MR, Stroebel RJ, McDonald FS, Hanson GJ, Naessens JM, Huschka TR, Kolars JC. Use of a registry-generated audit, feedback, and patient reminder intervention in an internal medicine resident clinic—a randomized trial. J Gen Intern Med. 2007;22(12):1740–4.


  25. MacLean CD, Gagnon M, Callas P, Littenberg B. The Vermont diabetes information system: a cluster randomized trial of a population based decision support system. J General Intern Med. 2009;24(12):1303–10.


  26. Peterson A, Gudbjornsdottir S, Lofgren UB, Schioler L, Bojestig M, Thor J, Andersson Gare B. Collaboratively improving diabetes Care in Sweden Using a National Quality Register: successes and challenges-a case study. Qual Manag Health Care. 2015;24(4):212–21.


  27. Peterson KA, Radosevich DM, O'Connor PJ, Nyman JA, Prineas RJ, Smith SA, Arneson TJ, Corbett VA, Weinhandl JC, Lange CJ, Hannan PJ. Improving diabetes Care in Practice: findings from the TRANSLATE trial. Diabetes Care. 2008;31(12):2238–43.


  28. Bricker PL, Baron RJ, Scheirer JJ, DeWalt DA, Derrickson J, Yunghans S, Gabbay RA. Collaboration in Pennsylvania: rapidly spreading improved chronic care for patients to practices. J Contin Educ Heal Prof. 2010;30(2):114–25.


  29. Baty PJ, Viviano SK, Schiller MR, Wendling AL. A systematic approach to diabetes mellitus care in underserved populations: improving care of minority and homeless persons. Fam Med. 2010;42(9):623–7.


  30. Toh MP, Leong HS, Lim BK. Development of a diabetes registry to improve quality of care in the National Healthcare Group in Singapore. Ann Acad Med Singap. 2009;38(6):546.


  31. Han W, Sharman R, Heider A, Maloney N, Yang M, Singh R. Impact of electronic diabetes registry 'meaningful use' on quality of care and hospital utilization. J Am Med Informatics Assoc. 2016;23(2):242–7.


  32. Lail J, Schoettker PJ, White DL, Mehta B, Kotagal UR. Applying the chronic care model to improve care and outcomes at a pediatric medical center. Jt Comm J Qual Patient Saf. 2017;43(3):101–12.


  33. Jakobsen E, Palshof T, Osterlind K, Pilegaard H. Data from a national lung cancer registry contributes to improve outcome and quality of surgery: Danish results. Eur J Cardiothorac Surg. 2009;35(2):348–52 discussion 352.


  34. Jakobsen E, Green A, Oesterlind K, Rasmussen TR, Iachina M, Palshof T. Nationwide quality improvement in lung cancer care: the role of the Danish lung Cancer group and registry. J Thorac Oncol. 2013;8(10):1238–47.


  35. Kraynack NC, McBride JT. Improving care at cystic fibrosis centers through quality improvement. Semin Respir Crit Care Med. 2009;30:547–58.


  36. Stern M, Niemann N, Wiedemann B, Wenzlaff P, German CFQA Group. Benchmarking improves quality in cystic fibrosis care: a pilot project involving 12 centres. Int J Qual Health Care. 2011;23(3):349–56.


  37. Siracusa CM, Weiland JL, Acton JD, Chima AK, Chini BA, Hoberman AJ, Wetzel JD, Amin RS, McPhail GL. The impact of transforming healthcare delivery on cystic fibrosis outcomes: a decade of quality improvement at Cincinnati Children's Hospital. BMJ Qual Saf. 2014;23(Suppl 1):i56–63.


  38. Moller JH, Hills CB, Pyles LA. A multi-center cardiac registry. A method to assess outcome of catheterization intervention or surgery. Prog Pediatr Cardiol. 2005;20(1):7–12.


  39. Dziuban SW, McIlduff JB, Miller SJ, Dal Col RH. How a New York cardiac surgery program uses outcomes data. Ann Thorac Surg. 1994;58(6):1871–6.


  40. Halpin LS, Barnett SD, Burton NA. National databases and clinical practice specialist: decreasing postoperative atrial fibrillation following cardiac surgery. Outcomes Manag. 2004;8(1):33–8.


  41. Beaulieu PA, Higgins JH, Dacey LJ, Nugent WC, DeFoe GR, Likosky DS. Transforming administrative data into real-time information in the Department of Surgery. Qual Saf Health Care. 2010;19(5):399–404.


  42. Carlhed R, Bojestig M, Peterson A, Aberg C, Garmo H, Lindahl B, Quality Improvement in Coronary Care Study Group. Improved clinical outcome after acute myocardial infarction in hospitals participating in a Swedish quality improvement initiative. Circ Cardiovasc Qual Outcomes. 2009;2(5):458–64.


  43. Adams CE, Wilson M, Haney M, Short R. Using the outcome-based quality improvement model and OASIS to improve HMO patients' outcomes. Outcome assessment and information set. Home Healthc Nurse. 1998;16(6):395–401.


  44. Bauer AM, Azzone V, Goldman HH, Alexander L, Unützer J, Coleman-Beattie B, Frank RG. Implementation of collaborative depression management at community-based primary care clinics: an evaluation. Psychiatr Serv. 2011;62(9):1047–53.


  45. Lee TH, Pearson SD, Johnson PA, Garcia TB, Weisberg MC, Guadagnoli E, Cook EF, Goldman L. Failure of information as an intervention to modify clinical management: a time-series trial in patients with acute chest pain. Ann Intern Med. 1995;122(6):434–7.


  46. Fonarow GC, Reeves MJ, Smith EE, Saver JL, Zhao X, Olson DW, Hernandez AF, Peterson ED, Schwamm LH, GWTG-Stroke Steering Committee and Investigators. Characteristics, performance measures, and in-hospital outcomes of the first one million stroke and transient ischemic attack admissions in get with the guidelines-stroke. Circ Cardiovasc Qual Outcomes. 2010;3(3):291–302.


  47. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, O’Brien MA, Johansen M, Grimshaw J, Oxman AD. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012, Issue 6. Art. No.: CD000259. p. 1–229.

  48. van der Veer SN, de Keizer NF, Ravelli ACJ, Tenkink S, Jager KJ. Improving quality of care. A systematic review on how medical registries provide information feedback to health care providers. Int J Med Inform. 2010;79(5):305–23.


  49. Lim T, Goh A, Lim Y, Morad Z. Use of renal registry data for research, health-care planning and quality improvement: what can we learn from registry data in the Asia–Pacific region? Nephrology. 2008;13(8):745–52.


  50. Porter ME, Lee TH. The strategy that will fix health care. Harv Bus Rev. 2013;91(12):24.


  51. Govaert JA, Govaert MJ, Fiocco M, van Dijk WA, Tollenaar RA, Wouters MW. Hospital costs of colorectal cancer surgery for the oldest old: a Dutch population-based study. J Surg Oncol. 2016;114(8):1009–15.


  52. McNamara RL, Spatz ES, Kelley TA, Stowell CJ, Beltrame J, Heidenreich P, Tresserras R, Jernberg T, Chua T, Morgan L, Panigrahi B, Rosas Ruiz A, Rumsfeld JS, Sadwin L, Schoeberl M, Shahian D, Weston C, Yeh R, Lewin J. Standardized Outcome Measurement for Patients With Coronary Artery Disease: Consensus From the International Consortium for Health Outcomes Measurement (ICHOM). J Am Heart Assoc. 2015;4(5). https://doi.org/10.1161/JAHA.115.001767.

  53. Porter ME, Larsson S, Lee TH. Standardizing patient outcomes measurement. N Engl J Med. 2016;374(6):504–6.



Acknowledgements

We thank Carla Sloof for her support in formulating the search strategy, the English language professional Valesca Hulsman for editing, and Jozé Braspenning, who performed a critical review of the manuscript.

Funding

This study was supported by The Netherlands Organisation for Health Research and Development (ZonMw) under project number 842001005. The funder had no influence on the study design, the collection, analysis and interpretation of the data, the writing of the report, or the decision on where to submit the article for publication.

Availability of data and materials

Not applicable.

Author information


Contributions

NK, NZ, PvdN, PvdW, SG and GW (hereinafter referred to as ‘all authors’) contributed to the conception and design of the study. The first two authors, NK and NZ, shared equally in the development and execution of the work. NK and NZ were the major contributors in writing the manuscript. NK, NZ, PvdN, PvdW and SG analyzed and interpreted the data. NK, NZ, PvdN, PvdW and SG had full access to all data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. All authors contributed to the analysis and interpretation and to the drafting of the article. All authors contributed to the revision of the article for important intellectual content. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Nynke A. Kampstra.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

Appendix 1. Caption: Search string PubMed, Embase and Cochrane. Appendix 2. Caption: Eligibility Form Data collection form for intervention reviews: RCTs and non-RCTs. Appendix 3. Caption: Quality Improvement Minimum Quality Criteria Set (QI-MQCS) items. From: Hempel, Susanne, Paul G. Shekelle, Jodi L. Liu, Margie Sherwood Danz, Robbie Foy, Yee-Wei Lim, Aneesa Motala, and Lisa V. Rubenstein. Development of the Quality Improvement Minimum Quality Criteria Set (QI-MQCS): a tool for critical appraisal of quality improvement intervention publications. BMJ quality & safety (2015): bmjqs-2014. Appendix 4. Caption: Detailed Summary of Included Studies. Appendix 5. Caption: Scoring of the Downs & Black criteria, SQUIRE guidelines and additional self-developed tool. Notes: a From the Downs & Black questionnaire, question 2 and 18 have been used (Downs & Black 1998). b From the SQUIRE guidelines, question 10c,11a and 11b have been used (Ogrinc et al., 2008). (DOCX 101 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Kampstra, N.A., Zipfel, N., van der Nat, P.B. et al. Health outcomes measurement and organizational readiness support quality improvement: a systematic review. BMC Health Serv Res 18, 1005 (2018). https://doi.org/10.1186/s12913-018-3828-9


Keywords