Introduction
Patient-reported outcomes (PROs) are frequently used as outcome measures in cancer clinical trials and in observational studies. More recently, they have also been introduced into daily clinical practice, where they provide clinicians and nurses with information about the symptom experience, functional health, and subjective well-being of patients that can be used during the clinical encounter. Although this feedback from PROs often leads to improved symptom detection [1-3], more discussion of problems [1-3], and higher levels of patient satisfaction [2], only a few studies have found a direct impact on quality of life (QoL) [4, 5].
Electronic data collection systems have been developed to facilitate the introduction of PROs into daily clinical practice. The major advantages of these electronic systems are that they enable efficient data collection and make PRO results directly available [6]. Most recently, PRO data collection systems have been embedded in Web-based patient portals and can be integrated into the electronic medical record. The use of an electronic data collection system also facilitates graphical presentation of the PRO results. Graphs are especially useful for displaying dynamic data, such as change over time [7].
To date, only limited information is available regarding how best to graphically summarize and display the results of PROs for both patients and health professionals. Several studies have investigated patients’ and health professionals’ understanding of graphically presented quality-of-life data at the group level, as obtained in clinical trials. These studies have shown that patients are most accurate in interpreting simple line graphs compared to simple bar charts or more complex graphs [8, 9] and that professionals prefer line graphs presenting change over time [10].
Individual PRO results are most likely to be presented as absolute scores at fixed time points. Although this allows for calculating and displaying change over time, interpreting an absolute score at a single time point is more challenging. This interpretation can be facilitated through the use of clinical thresholds that allow one to classify individual patients as a “case” [11]. The caseness thresholds may reflect a priori decision rules regarding symptom severity or may be related to external criteria or percentiles from general population or patient reference groups. Such thresholds can be integrated into graphical displays of PRO results using color-coding methods that indicate the severity or clinical importance of a symptom or problem [12-14].
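In practice, such threshold-based color-coding amounts to a simple mapping from a scale score to a severity band. The following minimal sketch illustrates the idea for a hypothetical 0–100 symptom scale (higher = worse); the cut-off values and the `severity_color` function are invented for illustration and are not the thresholds used in this study or defined for the QLQ-C30.

```python
# Illustrative sketch of threshold-based color-coding for a PRO symptom score.
# Scores are assumed to range from 0 to 100, with higher values indicating a
# worse symptom burden. The cut-offs below are hypothetical placeholders; real
# caseness thresholds would come from clinical decision rules or from
# reference-population percentiles.

def severity_color(score, mild=25, severe=50):
    """Classify a 0-100 symptom score into a traffic-light color band."""
    if score < mild:
        return "green"   # below the caseness threshold: no action needed
    elif score < severe:
        return "orange"  # mild-to-moderate: flag for discussion
    return "red"         # severe: clinically important, consider referral

# Example: color-code a series of fatigue scores across four visits
scores = [10, 30, 55, 45]
print([severity_color(s) for s in scores])
# → ['green', 'orange', 'red', 'orange']
```

A heat-map display like the one evaluated in this study can be thought of as applying such a mapping to every scale at every time point.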
Given the paucity of studies on the graphical presentation of individual-level PRO results, the aim of the current study was to investigate patients’ and health professionals’ understanding of and preferences for different graphical presentation styles for the EORTC QLQ-C30, a questionnaire frequently used to assess QoL in cancer patients [15]. In addition, we asked patients and health professionals for their opinions on general aspects of PRO data collection and use in daily clinical practice.
Discussion
In this study, we investigated cancer patients’ and health professionals’ understanding of and preferences for graphical presentation styles for individual-level PRO data obtained using the EORTC QLQ-C30 questionnaire. Patients’ objective and self-rated understanding were similar for the five graphical presentation styles, although they had a slight preference for bar graphs. Health professionals preferred heat maps, followed by non-colored bar charts and non-colored line charts. Their understanding of overall change was better for non-colored bar charts, and medical specialists were more accurate than other professionals in interpreting absolute scores. Self-rated understanding was substantially higher and did not differ significantly between professions or graphical presentation styles.
Compared with previous studies, the objective understanding of the patients in our study was relatively low, varying from 42.8 to 76.7 %. In previous studies using group-level data, these figures did not fall below 80 % [8, 17]. As the educational levels of patients appear to be comparable across the different studies, education is not likely to explain these differences. However, in a study using individual-level data, the percentage of correct answers varied from 64 to 96 % [18], which is also higher than the percentages we found. This makes it unlikely that the differences in observed understanding are caused by the use of group-level versus individual-level data. Possibly, the lower levels of understanding are due to the different types of graphical formats used across the studies.
Professionals’ understanding varied from 52.9 to 94.1 %, which is relatively low compared to the results of a recently published mixed-methods study, in which oncologists answered 90–100 % of questions correctly [18]. This difference might be due to the fact that we included professionals with different backgrounds, whereas the mixed-methods study included only oncologists. Nevertheless, with some exceptions, professionals’ understanding of the graphically presented PRO results was much higher than that of the patients. We suspect that this may be due to their familiarity with interpreting data in general, as well as to the fact that some of the health professionals had previous experience with PROs in general and with the QLQ-C30 in particular. Within the group of health professionals, we found that medical specialists were better at interpreting absolute scores than nurses and other health professionals, possibly because medical specialists are more accustomed to interpreting numerical data and charts. Indeed, many participating professionals indicated that they had previous experience with PROs, for example in clinical practice. As we recruited professionals only from the Netherlands Cancer Institute, a comprehensive cancer center, these results may not be representative of health professionals in general.
It is noteworthy that the self-rated understanding of both patients and health professionals was much higher than objectively measured understanding. Respondents may have answered the question assessing their self-rated understanding in a socially desirable way, providing an overly optimistic view. This is in line with two studies on lay understanding of medical terms [19, 20]. Self-rated understanding in this study did not differ as a function of graphical presentation style, whereas previous research has shown that line graphs were self-rated as easiest to understand [8].
Our findings regarding preferences are not in line with findings from studies on group-level data, which report that line graphs are preferred by patients and professionals [8, 10], or with a study on individual-level data in which line graphs were also preferred [18]. However, in those studies the selection was not made from a set of chart types fully comparable to the options used in our study. This discrepancy may therefore reflect a methods effect; if different combinations of graphs were used, preferences might also differ.
We found that both patients and professionals preferred PROs to be completed once a month during treatment and every 3 months after treatment. The higher frequency during treatment seems reasonable, given that one could expect more fluctuation and change in symptoms and functional health during this period. These findings are in line with the considerations of Snyder and colleagues regarding the implementation of PROs in clinical practice [11]. In addition, respondents in both groups indicated that they would prefer to compare current scores with a patient’s previous scores. Detecting worsening of symptoms and deterioration in functioning is particularly important in order to provide relevant care in a timely manner.
Our study has several limitations that need to be considered. First, although we investigated five graphical presentation styles, these did not represent all possible styles. Furthermore, patients were not shown all styles, but only either the non-colored or the colored ones (to prevent an exposure effect). Second, we used only hypothetical data, which might have led to an underestimation of objective understanding. Some patients explicitly indicated that the graphs were not representative of their health situation at the indicated time points. This suggests that these patients may have answered the questions with their own health status in mind, which could have differed from the health status shown in the graphs. Their interpretation might have been more accurate had they been provided with graphs reflecting their own health status. Third, we were only able to survey health professionals from a single hospital.
Our study also had a number of strengths, including the use of a variety of graphical presentation styles, the use of colored and non-colored graphics, and inclusion of patients from a number of countries, with different diagnoses, and both on- and off-treatment. We were also able to include a sizeable number of health professionals representing a variety of professions.
Because patients’ objective understanding, in particular, was relatively low, it is important to learn more about how patients interpret and understand their individual, graphically displayed PRO results. What are they thinking when they view such results? What information draws their attention? What do they understand and what do they not understand? These questions could be addressed via interviews in which patients are asked to verbalize what they are thinking when presented with graphs to interpret (a “think aloud” exercise [21]) and/or to reflect on their thinking process in retrospect. The results of such a qualitative study could be used to develop educational materials to help patients better understand their PRO results. For example, a tutorial video could be developed in which instructions are provided about the interpretation of PRO results. Special attention should be paid to the interpretation of functioning versus symptom scales, as our study, as well as another study [18], showed differences in understanding between these types of scales. Such a video could also include a test to assess whether a patient fully understands the graphs. Comparable materials could be developed for professionals. Such a tutorial should focus not only on interpretation, but also on how best to provide care to and/or refer patients with clinically relevant QoL scores. In a previous study, professionals indeed indicated that they required help interpreting QoL data, and especially the clinical relevance of those data [10].