Introduction

Patient-reported outcomes (PROs) during and after treatment for cancer provide an invaluable metric of care from the patient perspective. When measured serially, PROs can inform the full spectrum of cancer care, from diagnosis to survivorship, by monitoring the patient between visits, identifying increased risk for poor outcomes, and personalizing care [1,2,3,4,5]. Since its inception in 2009, the BREAST-Q patient-reported outcome measure (PROM) has been extensively validated [6,7,8,9,10,11] and utilized to quantify the impact of reconstructive surgery on health-related quality of life (HR-QOL) and patient satisfaction among women undergoing breast reconstruction after mastectomy [12, 13]. In 2017, the International Consortium for Health Outcomes Measurement listed the Satisfaction with Breast domain of the BREAST-Q as part of a standard set of patient-centered outcomes that matter most to women with breast cancer and that should be routinely collected in clinical practice [14]. However, despite being considered the gold-standard PROM for breast reconstruction [14,15,16] and being widely used in PRO research, the BREAST-Q has been adopted more slowly in routine clinical care.

Low patient and clinician participation is a challenge that undermines the routine, longitudinal use of PROMs in clinical practice [5, 17,18,19,20]. There are few longitudinal BREAST-Q studies in the surgical research literature; a 2016 systematic review noted that only 29% of PRO studies using the BREAST-Q were prospective, while 71% were cross-sectional [21]. BREAST-Q completion rates have been reported as low as 27% [22, 23], underscoring the need for increased investment in, and investigation of, strategies for improving patient participation in this important health metric for both clinical care and research. One strategy evaluated in areas outside of breast reconstruction is electronic PROM administration; however, studies have shown low sustainability, with decreasing patient compliance when electronic PROM capture methods were not coupled with additional reminders or manual administration by research or clinical staff [24,25,26,27].

At our institution, the BREAST-Q has been administered electronically as part of routine clinical care since 2011. We used a passive approach consisting only of email reminders, leading to suboptimal completion rates. Here, we describe the key components of a 2018 quality improvement initiative designed to increase BREAST-Q completion rates and optimize the sustainable implementation of the BREAST-Q for routine clinical care. Quality improvement, or the continued effort to improve patient outcomes, systems performance, and professional development [28], can draw on numerous frameworks; in this initiative, we utilized the Plan-Do-Study-Act (PDSA) framework [29, 30]. We also identify factors that contribute to low BREAST-Q response when a passive approach to patient engagement is used.

Methods

BREAST-Q

The BREAST-Q is a PROM developed specifically to assess patient satisfaction and quality of life for patients undergoing breast surgery for the prevention or treatment of cancer, including breast conserving therapy, mastectomy, and breast reconstruction, and for non-cancer indications, including breast augmentation and breast reduction. Modules have been designed to be specific to each procedure type. Crucially, the BREAST-Q was developed with patients in mind: all questions included in the BREAST-Q were created through structured interviews and focus groups with breast surgery patients and tested by a large cohort of breast surgery patients. The BREAST-Q is divided into 6 main domains: Satisfaction with Breast, Satisfaction with Outcome, Satisfaction with Care, Physical Wellbeing, Psychosocial Wellbeing, and Sexual Wellbeing. Each domain can be administered and scored independently. Scores are generated by transforming raw scores into a 0–100 QScore through Rasch score conversion. Patients must complete at least 50% of the questions in a domain for that domain to be scored. The entire BREAST-Q takes about 10 min to complete. Further details regarding BREAST-Q administration can be found at https://qportfolio.org/breast-q/breast-cancer/.
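The scoring rules above (independent domains, a 50% completion threshold, and a raw-to-QScore transformation) can be sketched in a few lines. This is an illustrative sketch only: the real conversion uses the licensed BREAST-Q Rasch conversion tables, so the linear map below, along with the `score_domain` name, item count, and response scale, are hypothetical stand-ins rather than the actual scoring algorithm.

```python
def score_domain(responses, n_items, n_levels=5):
    """Score one hypothetical BREAST-Q-style domain.

    responses: list of answered item values, each 1..n_levels
               (unanswered items are simply omitted from the list).
    Returns a 0-100 score, or None if fewer than 50% of the domain's
    items were answered (the threshold below which a domain is not scored).
    """
    if len(responses) < 0.5 * n_items:
        return None  # too few answers: domain cannot be scored

    # Pro-rate the raw sum to the full item count, then map onto 0-100.
    # NOTE: this linear map is a stand-in for the real Rasch conversion
    # table, which is part of the licensed BREAST-Q materials.
    raw = sum(responses) * n_items / len(responses)
    lo, hi = n_items * 1, n_items * n_levels
    return round(100 * (raw - lo) / (hi - lo))
```

For example, a 10-item domain with all items answered at the midpoint scores 50, while one with only 2 of 10 items answered is not scored at all.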

BREAST-Q administration, 2011–2017

At Memorial Sloan Kettering Cancer Center (MSK), the BREAST-Q Reconstruction module has been administered electronically as part of the clinical care of breast reconstruction patients since 2011. From 2011 to 2017, we assigned the BREAST-Q to patients online through Web Core, our institutional software for clinical research survey administration. BREAST-Qs were assigned at specific time points relative to a patient’s initial surgery date (e.g., pre-operative, 2 weeks post-operative, 1 month post-operative). This time-point-driven approach to administration was passive: patients received only email reminders, and only when a specific assessment time point approached. Furthermore, BREAST-Q scores often were not easily accessible or available for review at the time of clinical visits, so providers were unable to include the BREAST-Q in discussions with patients.

Quality improvement initiative

In 2018, the Plastic and Reconstructive Surgery Service at MSK began a quality improvement initiative to increase BREAST-Q completion rates. The overarching goal was to reframe the BREAST-Q as an essential, routine part of the clinical encounter. Our main change was therefore to administer the BREAST-Q prior to every clinical encounter, regardless of the patient’s time from surgery. By using this approach, we aimed to link the BREAST-Q to clinical visits, for both patients and clinical staff, such that the assessment of patient satisfaction and quality of life becomes an expected part of postoperative follow-up. We then utilized several strategies to support the transition to, and sustainability of, this new approach to PROM implementation.

First, we started using an in-house electronic PROM platform, MSK Engage, which is connected to both the online patient portal, MyMSK, and the electronic health record. MSK Engage assigns specific codes based on clinic visit type and procedure and uses these codes to identify breast reconstruction patients within the system who have a clinic visit scheduled, assigning them the BREAST-Q within 1 month prior to the scheduled visit. MSK Engage also flags these patients so that clinical staff can cross-reference them with daily patient lists. Patients who do not complete the assessment at home prior to their visit are asked to complete it in clinic on a tablet. All results are securely stored through MSK Engage in an institutional database and can be accessed immediately by providers through the electronic health record during the clinical encounter. Providers can see not only the most recent BREAST-Q score but also score trends starting from the patient’s first completed BREAST-Q, allowing them to understand how satisfaction and wellbeing have progressed throughout the postoperative recovery period.
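The visit-driven assignment logic described above can be sketched as follows. The class and field names, the visit-type codes, and the 30-day window are illustrative assumptions for the sketch, not the actual MSK Engage implementation.

```python
# Sketch of visit-driven PROM assignment: any patient with a qualifying
# clinic visit in the next month is assigned a BREAST-Q and flagged so
# staff can cross-reference the daily patient list.
from dataclasses import dataclass, field
from datetime import date, timedelta

RECON_VISIT_CODES = {"PRS-FU", "PRS-POSTOP"}  # hypothetical visit-type codes


@dataclass
class Patient:
    mrn: str
    visits: list                     # (visit_date, visit_code) tuples
    assigned: list = field(default_factory=list)
    flagged: bool = False            # shown on daily clinic patient lists


def assign_breastq(patients, today):
    """Assign a BREAST-Q ahead of any qualifying visit within 1 month."""
    window_end = today + timedelta(days=30)
    for p in patients:
        for visit_date, code in p.visits:
            if code in RECON_VISIT_CODES and today <= visit_date <= window_end:
                p.assigned.append(("BREAST-Q", visit_date))
                p.flagged = True
                break  # one assignment per patient per run
```

Because the trigger is simply "a qualifying visit is coming up," no staff member needs to track where each patient sits on a postoperative timeline.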

Second, each clinical site was appointed a “BREAST-Q Champion,” who served as a clinical site leader and liaison. Champions had extensive experience with the personnel, workflow, and tablet inventory of each site and were familiar with BREAST-Q and MSK Engage. In this role, Champions were responsible for managing BREAST-Q-related matters at individual clinics, identifying and summarizing shortcomings in clinic workflow, compiling front office staff and patient feedback, and attending weekly meetings with the clinical research coordinator.

Third, we implemented real-time monitoring of BREAST-Q completion rates using an online dashboard (Tableau Software, LLC., Seattle, WA) that summarized and visualized MSK Engage data on a daily and weekly basis (Fig. 1). BREAST-Q completion rates were summarized by clinic site and by provider and could be compared across sites and providers, allowing more effective identification of where additional process improvement was necessary. This dashboard also served as a source of accountability and encouragement, since performance data were emailed to the project leaders and the site champions.

Fig. 1

Example of the online dashboard used to track BREAST-Q completion rates

Most importantly, this quality improvement initiative was a live process, involving multiple PDSA (Plan-Do-Study-Act) cycles and active engagement from all stakeholders (Fig. 2). Throughout this initiative, clinical staff, BREAST-Q Champions, and clinical research coordinators met regularly to troubleshoot current workflows and elicit feedback, resulting in meaningful and sustainable changes. For example, clinical staff began consistently referring to the BREAST-Q as an “assessment” after finding that patients were more willing to complete an “assessment” than a “survey,” as the latter term connoted a research-only, rather than a patient-care, purpose.

Fig. 2

Plan-Do-Study-Act diagram of quality improvement initiative

Study design

After approval from the Institutional Review Board (IRB #18–202), we conducted a retrospective review of all postmastectomy breast reconstruction patients at MSK who were asked to complete at least one BREAST-Q from January 2011 (the start of routine BREAST-Q use in clinical care) to July 2020.

Data were collected to determine the number of BREAST-Qs requested and the number of BREAST-Q responses. Demographic data for patients who did and did not respond to BREAST-Q requests included age, race, marital status, and insurance type. We also collected data on comorbidities, such as diabetes, hypertension, history of psychiatric diagnoses, lymphedema, and smoking, as well as on aspects of cancer and reconstruction care, such as chemotherapy, radiation, and type of breast reconstruction. A BREAST-Q assessment was excluded from the analysis of completion rates if its date relative to the reconstructive procedure could not be determined (e.g., survey time point not recorded, surgery date missing).

Statistical analysis

BREAST-Q completion rates were determined by dividing the total number of completed BREAST-Qs by the total number of requested BREAST-Qs during the time period. Completion rates before and after implementation of the quality improvement initiative were compared using a chi-squared test. Descriptive statistics were used to summarize demographic characteristics of BREAST-Q responders and non-responders. Responders were defined as patients who completed at least one requested BREAST-Q, while non-responders were defined as patients who did not complete any BREAST-Qs. Responders and non-responders were compared on baseline demographics, comorbidities, and cancer treatment characteristics using the independent samples Student t-test (continuous variables) and the Pearson chi-squared or Fisher’s exact test (categorical variables, as appropriate for expected cell frequencies). Means are presented as mean ± standard deviation. P-values < 0.05 were considered statistically significant; all statistical analyses were conducted in R (version 3.6.2, packages: tidyverse, readxl).
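As a sketch of the pre/post comparison above, the Pearson chi-squared statistic for a 2 × 2 table (period × completed/not completed) can be computed directly. The counts below are illustrative placeholders chosen only to approximate the reported completion rates, not the study's exact tallies, and the study itself used R rather than the Python shown here.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-squared statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))


# Rows: period (pre/post intervention); columns: completed / not completed.
# Placeholder counts approximating ~42.8% pre and ~76.4% post completion.
pre_done, pre_missed = 4280, 5720
post_done, post_missed = 7640, 2360

stat = chi_square_2x2(pre_done, pre_missed, post_done, post_missed)
# Compare against the 0.05 critical value for 1 degree of freedom (3.841).
significant = stat > 3.841
```

With a difference of this magnitude the statistic is far above the critical value, consistent with the p < 0.001 reported in the Results.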

Results

Annual BREAST-Q completion rates for 2011–2019

From January 2011 to December 2019, 41,981 BREAST-Q assessments were collected, out of 109,435 requested BREAST-Qs. Of the 41,981 BREAST-Q assessments, 3,399 were excluded because reconstructive surgery date was missing, resulting in 38,582 BREAST-Qs available for analysis. Prior to our quality improvement interventions (2011–2017), 22,730 BREAST-Q assessments were collected (58.9% of all collected assessments), while, after quality improvement began (2018–2019), 15,852 assessments were collected (41.1% of all collected assessments).

Annual BREAST-Q completion rates were determined for BREAST-Qs requested at or before 2 years postoperatively (Fig. 3). Prior to the intervention in 2018, annual completion rates ranged from 36.7 to 48.2%, with an overall completion rate of 42.8% over the 7-year period. After the intervention, annual completion rates were 63.3% in 2018 and 87.6% in 2019, with an overall completion rate of 76.4% over the 2-year period. Completion rates before and after the intervention differed significantly on chi-squared analysis (p < 0.001). Of note, 61.4% of all BREAST-Qs collected from 2011 to 2019 were completed in the 7-year period before the intervention; 38.6% were collected in the 2-year period after the intervention (Table 1).

Fig. 3

Number of BREAST-Qs completed by patients who were less than 2 years after surgery

Table 1 Annual BREAST-Q completion rates by year, 2011–2019, for assessments requested less than or equal to 2 years after reconstruction date

When analyzing BREAST-Q assessments based on time from surgery, 20.8% were received immediately after surgery, with 8022 BREAST-Qs received in the 0–3 month postoperative window. In addition, a nearly equal proportion of assessments were received from patients who were at least 5 years from their initial reconstruction (7533 assessments, 19.5%). The proportion of assessments at each time point completed in 2018–2019 ranged from 24.3% (27–30 months) to 57.2% (> 60 months) (Fig. 4).

Fig. 4

Number of BREAST-Qs completed from 2011 to 2019 by time from initial reconstructive surgery

Effect of COVID-19 on BREAST-Q completion rates

From January 2020 to July 2020, 3,726 of 4,412 BREAST-Qs were completed, for an overall completion rate of 84.5%. To capture the potential effect of the COVID-19 pandemic on patients’ choice of BREAST-Q completion method, the proportion of BREAST-Qs completed at home through the online patient portal versus in clinic on an iPad in 2020 was compared to that in 2019 for the months of January to July (Fig. 5). From January 2019 to June 2019, 38.8% of 6,444 BREAST-Qs were completed online prior to clinic; from January 2020 to June 2020, 49.7% of 3,726 BREAST-Qs were completed online prior to clinic (p < 0.001). An upward trend in at-home BREAST-Q completion was seen in 2020 but not in 2019, with 70% of all July 2020 BREAST-Qs completed through the online portal.

Fig. 5

Proportion of BREAST-Qs completed at home via the online portal versus in-clinic via electronic tablet from January–July 2019 vs. January–July 2020

BREAST-Q responder vs. non-responder characteristics

Given the low completion rates before the quality improvement intervention, we compared the demographic and clinical characteristics of BREAST-Q responders and non-responders for the 2011–2017 period (Table 2). In total, 7119 unique patients were identified: 6262 (88.0%) responders and 857 (12.0%) non-responders. Responders were slightly younger than non-responders (49.7 ± 10.2 vs. 52.2 ± 10.3 years, p < 0.001). A significantly larger proportion of responders were white (76.9 vs. 73.6%, p = 0.0015) and had private insurance (79.4 vs. 69.8%, p < 0.001). No significant differences were noted in marital status, comorbidities, type of reconstruction, or cancer treatment.

Table 2 Demographics of BREAST-Q responders and non-responders from 2011 to 2017

Discussion

Lessons learned from the MSK BREAST-Q experience

After the implementation of our quality improvement intervention in 2018, our BREAST-Q completion rates increased dramatically, from an overall rate of 42.8% before the intervention to 87.6% in 2019. These data demonstrate the effectiveness of our intervention in improving patient completion rates for a clinically valuable PROM. Additionally, our high response rate indicates that we improved engagement with patients whom our non-responder analysis showed were previously less likely to respond to the BREAST-Q. Our strategy used key components of existing PROM implementation frameworks and addressed weaknesses identified in similar initiatives for routine PROM use in clinical care. Recent reviews [1, 31, 32] on PROM implementation have highlighted the importance of high clinician and staff engagement (including the concept of assigning stakeholders as team champions), seamless integration into the clinical workflow, and access to data in real time for immediate use during the patient encounter. These elements all supported the administration of the BREAST-Q at every clinical encounter, which normalized BREAST-Q completion for both clinical staff and patients and reframed the BREAST-Q as a patient-care tool that can personalize clinical encounters.

Financial cost, survey fatigue, and the unsustainability of high provider and staff engagement are common concerns regarding the feasibility of routine longitudinal use of PROMs [31, 33]. A limitation to the generalizability of our initiative is the high upfront resource investment in tools like Tableau and MSK Engage, as well as in substantial clinical and research support staff. Pronk et al. examined the costs associated with minimal (digital automated system only) versus maximal effort (digital automated system combined with manual collection) PROM implementation in orthopedic procedures and demonstrated considerable completion rate increases (44 to 76%) using maximal effort, though this approach was more costly to implement [26]. However, unlike Pronk et al. and other researchers in the surgical literature [34,35,36,37], our BREAST-Q administration was not tied to a particular surgical time point but rather was integrated into the daily clinic workflow and expected for nearly every patient. This implementation method eliminates the cost of dedicated personnel for monitoring patient completion or determining patient eligibility for PROM completion. Therefore, an initial high investment in streamlining PROM administration technology may be balanced by significantly lower and more sustainable maintenance costs. Indeed, in our quality improvement initiative, all BREAST-Q assignments were automated, and no additional clinical or research staff were needed to collect BREAST-Qs from patients.

In terms of long-term unsustainability and survey fatigue [31, 33], a recent systematic review found that prospectively maintained PROM registries experienced a drop in completion rates from 75% at baseline to 61% by 2 years and 50% by 5 years [27]. It may therefore seem counterintuitive that increasing the number of BREAST-Q requests per patient would improve completion rates, but this step was crucial: it changed the patient mindset to one in which the BREAST-Q was essential to patient care. From January to July 2020, during the initial period of the COVID-19 pandemic, our patients continued to complete the BREAST-Q at rates comparable to the same period the previous year. Interestingly, patients completed more assessments online through our web portal, without the usual prompting that occurs at in-person office visits. We theorize that this reflects reframed patient expectations: patients have become accustomed to being assessed on PROs and, most importantly, understand that PROMs are used to provide individualized care during clinical visits. Similar successes in patient adherence have been seen with electronic PROM methods for symptom reporting in chemotherapy patients, where completed assessments were summarized into reports or alerts that providers could review with patients during clinical visits [38, 39].

The main limitation of this study is that the intervention period is shorter than the pre-intervention period. It is possible that the high completion rates seen after the intervention will not be sustained over time, resulting in an overestimation of the success of our initiative. Our clinic-based intervention also may not increase completion rates if patients do not return for follow-up. Further research is therefore warranted to measure the long-term sustainability of our implementation methodology.

An implementation framework for any PROM at any institution

Modifications may be necessary to apply these methods to other institutions, but we believe that the central philosophy of using PROMs as a routine clinical tool is likely highly generalizable and can be a driving force for successful implementation of PROMs in many clinical contexts. First, however, the right PROM must be selected. The BREAST-Q was ready for use as a routine clinical tool because it was developed and validated to be clinically relevant to patients and clinicians. When BREAST-Q data are available, we have used them to guide clinical care, including recommending further surgical intervention and providing appropriate referrals to services such as physical therapy or sexual health. Other PROMs that have not undergone the same rigorous development and validation process may not be as successful if the information gathered is not useful or applicable to patient care.

Once an appropriate PROM has been selected, the next consideration is setting up the infrastructure to collect, store, and display PROM data. We recommend the use of electronic PROM collection methods to streamline this process. Electronic PROM collection can also help ensure the availability of PROM data at the time of the clinical visit, since patients can complete PROMs prior to their visit. Such infrastructure can be costly to build, but we assert that making PROM administration routine, instead of tied to a particular schedule, can greatly reduce the complexity of the programming needed to implement PROMs as well as the time and energy needed from clinical staff to monitor PROM collection. The accessibility of technological infrastructure for PROM implementation can be further improved through standard modules that can be integrated into the electronic health record. Any clinical practice could incorporate such ready-made modules with some tailoring based on the specific PROM desired and clinical workflow requirements.

Finally, even with electronic PROM administration, we encourage clinical practices to designate a “PROM champion,” at least at the beginning, so that there is a point person to track PROM completion rates and respond to any inefficiencies in the system. The PROM champion should ideally be an existing member of the clinical staff as they will have a clear understanding of the clinical workflow and how best to integrate the PROM into the workflow. Such champions have been demonstrated to be key components of quality improvement initiatives [40,41,42], and in fact may be one of the most important aspects to improving PROM administration [43]. Clinical practices can choose to use technology like Tableau to monitor completion rates, but this is likely not necessary for smaller organizations. These champions may improve the long-term sustainability of PROM implementation by streamlining the process initially and by performing routine (semi-annual, annual) check-ins to address any issues that may arise as practices change over time.

Conclusion

Despite increasing evidence supporting their value in routine clinical care, the actual implementation of PROMs in this context is a major challenge. We describe our institution’s experience with a PROM for breast reconstruction, BREAST-Q. Our strategy focused on high patient, provider, and staff engagement by reframing the BREAST-Q as a routine clinical tool and on integrating the BREAST-Q into the clinical workflow using new technologies. As this strategy was successful in significantly increasing patient participation and engagement in the BREAST-Q, similar implementation techniques may prove beneficial at other institutions interested in incorporating PROMs into routine care.