What is new?
When patient-reported outcomes (PROs) are used to evaluate the effects of an intervention, a score may change because respondents reconceptualize or recalibrate the construct being measured rather than because of true change. This article provides a statistical framework for examining this phenomenon in the context of a randomized clinical trial.
Health care interventions targeting improvement of patient-reported outcomes, such as symptoms, function, and health-related quality of life (HRQL), can induce both true change and response shift. Response shift is defined as changes in a person's self-evaluation, resulting from changes in internal standards or recalibration of the measurement scale; changes in the definition or conceptualization of the construct; and/or changes in values or in the prioritization of domains within the construct [1]. Stroke is one condition where response shift is a likely scenario because of the sudden onset of loss of function that recovers at a variable rate over the ensuing days to months. The health care system offers most services early after stroke and emphasizes improving impairments and activity limitations. Beyond 3 months, formal rehabilitation ceases and persons are encouraged toward community participation with the ultimate aim of reaching an optimal HRQL. This shift in focus from impairment to participation and quality of life would induce response shift as stroke recoverers learn to value what they can do despite impairments rather than focusing on the limitation itself. Evaluating interventions is complicated by spontaneous recovery, change in outcome emphasis, and response shift. For example, in trials where the intervention involves components of care and support by a team of health professionals (e.g., trials of rehabilitation services, continuity of care, home care, family support, community interventions, and so forth), the intervention may act as a catalyst for participants to recalibrate, reconceptualize, or reprioritize constructs that are included in measures such as HRQL.
In a trial, both groups at randomization likely start with the same conceptualization of the outcome and the same internal standard of measurement. Through the intervention, the treatment arm may acquire new information and knowledge about stroke (prevention, consequences) and ways of coping with the consequences of the disease (functional or social limitations), and may be encouraged to interact with others who are going through the same experience (support groups that may initiate social comparisons, and so on). In this situation, changes due to response shift become difficult to disentangle from those due to the designed-in components of the intervention. In many instances, response shift is integral to the intervention and there is really no need to untangle the separate effects. However, if elements of the intervention act to improve some outcomes but response shift acts in the opposite direction on other outcomes, then separating out these effects becomes crucial.
The literature on early supported discharge poststroke offers several examples where response shift could have occurred [2], [3], [4], [5]. For example, three of these studies found that the intervention group, as compared with the control group, had significantly higher scores on objective clinical measures, performance-based measures, or measures of activities of daily living, but not on HRQL [2], [3], [5]. The intervention offered by the home care team may have acted as a catalyst for the initiation of response shift in the treatment group, perhaps making them aware earlier of the impact of stroke on their lives as therapy and care were offered at home. This effect may have been delayed for the comparison group, and this differential effect would make the two groups incomparable. A simple comparison of mean scores, as is traditionally done, would attenuate or exaggerate differences in HRQL between the groups, or the groups would appear to have similar levels of HRQL, as was the case for these studies on early supported discharge. If assessments of change in HRQL do not account for possible response shift, results could be misleading regarding the added benefit of an intervention and in turn could affect health care policy planning. Although these and other trials [6] suggest that response shift can occur, none included methods for directly assessing it.
Methods for response shift investigation are often classified as design-based or statistical, but an alternate classification might be individual- or group-based [7]. Design-based methods include the then-test [8], [9] and personalized interviewing [10]; these yield information on individuals, but they must be built into the study from the outset.
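To make the then-test logic concrete, the following is a minimal illustrative sketch, not an analysis from this trial: recalibration is estimated as the difference between a participant's retrospective re-rating of baseline (the then-test) and the original baseline rating, and recalibration-adjusted change as posttest minus then-test. All scores below are invented for illustration.

```python
# Hypothetical then-test arithmetic; the data are invented, not from the
# trial described in this article.
from statistics import mean

# Each record: (pretest, posttest, thentest) on the same PRO scale.
# The then-test is the participant's retrospective re-rating, at
# follow-up, of their baseline state.
records = [
    (60, 70, 55),
    (50, 65, 50),
    (55, 60, 45),
]

def recalibration(pre, post, then):
    # Recalibration response shift: retrospective re-rating of baseline
    # minus the original baseline rating.
    return then - pre

def adjusted_change(pre, post, then):
    # Change with recalibration removed: posttest minus then-test.
    return post - then

shifts = [recalibration(*r) for r in records]
changes = [adjusted_change(*r) for r in records]
print(mean(shifts))   # → -5 (average recalibration in this toy sample)
print(mean(changes))  # → 15 (average recalibration-adjusted change)
```

A negative mean recalibration, as in this toy sample, would indicate that participants retrospectively judged their baseline health as worse than they originally rated it.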
Statistical methods until recently were mainly group-based and assessed average response shift using techniques such as structural equation modeling (SEM) [11], a technique requiring large sample sizes. Mayo et al. [12] recently introduced a statistical approach assessing response shift at an individual level by examining the difference between a patient-reported outcome and what would have been predicted for that person based on other measured variables. The pattern of these residuals over time was used to infer response shift.
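The residual-based idea can be sketched as follows. This is an illustrative simulation under assumed data, not the method or data of Mayo et al.: at each visit, a PRO score is regressed on an observed clinical variable (here, a simulated disability score), and each person's residual pattern across visits is then examined for systematic drift.

```python
# Illustrative sketch of an individual-level residual approach: predict
# each person's PRO score from a clinical variable, then track residuals
# across visits. All data and variable names are simulated.
import numpy as np

rng = np.random.default_rng(0)
n, visits = 40, 3

# Simulated disability scores and PRO outcomes at each visit.
disability = rng.normal(50, 10, size=(n, visits))
pro = 0.8 * disability + rng.normal(0, 5, size=(n, visits))

residuals = np.empty_like(pro)
for t in range(visits):
    # Ordinary least squares fit with an intercept, one visit at a time.
    X = np.column_stack([np.ones(n), disability[:, t]])
    beta, *_ = np.linalg.lstsq(X, pro[:, t], rcond=None)
    residuals[:, t] = pro[:, t] - X @ beta  # observed minus predicted

# A person whose residuals drift systematically (e.g., increasingly
# positive: reporting better health than the model predicts) would be a
# candidate for response shift under this logic.
drift = residuals[:, -1] - residuals[:, 0]
print(drift.shape)  # one drift value per person
```

By construction the residuals average to zero at each visit; it is the within-person pattern over time, not the group mean, that carries the response-shift signal.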
We recently evaluated, through a randomized trial, an intervention aimed at assisting persons with stroke to make the transition from acute care to home. In this study, a nurse, through home visits and telephone monitoring, provided nursing interventions that included active surveillance of health status, information about stroke, medication management, assistance with accessing needed health care services, active listening, and family support [13]. A control group received no additional help with making the transition from hospital to home.
There were no statistically significant differences between groups on the primary outcome measure, self-rating of HRQL, or on any of the secondary outcome measures or health services utilization. For this population, there was no evidence that this type of passive case management conferred any added benefit, in terms of improved perceived health or reduced health services utilization and stroke impact, over usual postdischarge management.
The infrastructure put in place for this trial permitted, for a subset of the sample, an evaluation of several aspects of response shift [7], [9], [14]. However, the effect of response shift on the trial results has not yet been examined. Given that methods for detecting response shift are few and those that are available are not part of the usual statistical approaches to data analysis, the specific objective of this article is to estimate, in one data set, the extent to which different methods of assessing response shift lead to different conclusions about the presence of response shift. We hypothesized that the Case Management group would experience a greater degree of recalibration as measured by the then-test than the Usual Care group. We also hypothesized that case management would induce response shift that would initially be negative (at 6 weeks) because the nurse manager would make people more aware of their need for care but by 6 months the intervention group would rebound, with people reporting higher levels of health than predicted by their stroke-related disability. We hypothesized that both reconceptualization and reprioritization response shifts would occur differentially between treatment groups and this would partially explain why there was no difference between the two groups. The ultimate aim of this project was to propose a framework for investigating response shift, in the context of a clinical trial evaluating an intervention with the potential to invoke a differential response shift between treatment groups.