Advances in the meta-analysis of heterogeneous clinical trials II: The quality effects model
Introduction
In 2008, we provided a solution to the problems with the random effects (RE) model for meta-analysis using a quality adjusted model which we called the quality effects (QE) model [1], [2]. In the previous paper in this series, we also discussed a variant of the QE model, the inverse variance heterogeneity (IVhet) model, which does not require quality assessment because all studies are by default assigned the same quality [3]. The initial problem was that, as heterogeneity increases, the coverage of the RE confidence interval drops well below the nominal level [4], substantially underestimating the statistical error and producing overconfident conclusions [5], [6]. In addition, we believe that the way the RE model's modification of the inverse variance weights is conceptualized [7] lacks justification under a strict view of randomization in statistical inference [8]. We therefore introduced these alternative models in an attempt to lower the estimator mean squared error and to obtain confidence interval coverage that keeps to the nominal level across different degrees of heterogeneity [1], [3].
We now demonstrate that input of quality into the model can markedly improve the performance of the estimator compared with the conventional random effects estimator or the IVhet estimator that replaces it [3]. Additionally, because quality assessment is often viewed with suspicion as highly subjective, the performance measures are obtained after subjecting the quality input to various degrees of random variation (at the point of input to the model) to see how this affects estimator performance. The QE model examined in this paper updates the QE model of meta-analysis proposed in 2008 [1], [2] in two important respects. First, overdispersion observed with the initial estimator has been corrected using an intra-class correlation based multiplicative scale parameter. Second, the quality scores were originally re-scaled between 0 and 1 for input into the model. They are still rescaled between 0 and 1, but each rescaled score is now also divided by the maximum rescaled score within the meta-analysis before it is input into the model. This keeps the scores in the 0–1 range but allows them to reflect their relative nature, i.e., quality relative to the best study in the meta-analysis. This is discussed further in the next section.
Difference between the random and quality effects weighted means
Consider a collection of k independent studies, the jth of which has an estimated effect size that varies from its true effect size, δj, through random error. Also consider that the true effects, δj, vary from an underlying common effect, θ, through bias. There is the possibility of some diversity of true effects (which remain similar) across studies (in which case θ would simply be the mean of the true (unbiased) effects). A greater diversity that leads to dissimilarity of effects would
Variance of the estimator under different models
The difference between the RE model and the QE model is that the former has all ϕj² replaced by γ², so that ŵj = 1/(υj + γ²), and the weights have a decreasing capacity to minimize error due to sampling variability as heterogeneity increases and the weights equalize. In the case of the QE model, where Qj varies across studies, the estimator discounts studies with greater random error as well as those with greater internal study bias. The QE estimator will thus be expected, with
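The equalization of the RE weights ŵj = 1/(υj + γ²) as heterogeneity grows can be illustrated with a small sketch (the within-study variances below are hypothetical values chosen for illustration, not data from the paper):

```python
import numpy as np

def re_weights(v, gamma2):
    """RE weights w_j = 1 / (v_j + gamma^2); as gamma^2 grows the
    weights equalize and large (low-variance) studies lose influence."""
    return 1.0 / (v + gamma2)

v = np.array([0.01, 0.04, 0.25])  # hypothetical within-study variances
for gamma2 in (0.0, 0.1, 1.0):
    w = re_weights(v, gamma2)
    print(gamma2, np.round(w / w.sum(), 3))
```

With γ² = 0 the lowest-variance study dominates; with γ² = 1 the normalized weights are nearly equal, which is the loss of error-minimizing capacity described above.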
Examining estimator performance using simulation
We now proceed to examine the performance of the RE and QE estimators (Table 1) under varying degrees of heterogeneity. The odds ratio (OR) is used as the effect size (though the models can deal with any of the common effect measures), and the simulation is modeled around the magnesium meta-analysis data [16], which was previously reviewed by Al Khalaf et al. [17]. Based on this meta-analysis, a simulation study was set up, fixing the true effect size in each simulation to an OR between 0.4 and 4
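A minimal sketch of such a coverage simulation is shown below, using the standard DerSimonian–Laird RE estimator as a stand-in for the estimators compared in the paper; the number of studies, variance ranges, and heterogeneity value are assumptions for illustration, not the paper's actual simulation design:

```python
import numpy as np

rng = np.random.default_rng(1)

def dersimonian_laird(y, v):
    """DL estimate of tau^2 and the RE pooled estimate with a 95% CI."""
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)      # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)
    est = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return est, est - 1.96 * se, est + 1.96 * se

def coverage(n_sims=500, k=10, tau2=0.3):
    """Fraction of simulations in which the CI contains the true log OR."""
    hits = 0
    for _ in range(n_sims):
        theta = np.log(rng.uniform(0.4, 4.0))        # true OR in [0.4, 4]
        delta = rng.normal(theta, np.sqrt(tau2), k)  # study true effects
        v = rng.uniform(0.02, 0.3, k)                # within-study variances
        y = rng.normal(delta, np.sqrt(v))            # observed log ORs
        _, lo, hi = dersimonian_laird(y, v)
        hits += int(lo <= theta <= hi)
    return hits / n_sims

print(coverage())
```

Running the same loop at several values of tau2 is what reveals the below-nominal coverage of the RE interval under increasing heterogeneity described in the Introduction.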
Real data examples
To compute the meta-analysis results using real data requires the following steps:
- a) Quality assessment of individual studies using a quality scale, computing a univariate quality score. Each component is equally weighted, given that we do not yet have sufficient information from meta-epidemiological studies to do otherwise. In the future, differential weighting of quality components may become an option as data from such studies accrue.
- b) Conversion of the univariate score to Qj by dividing each score
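The rescaling described in the Introduction (scores rescaled to 0–1, then divided by the maximum rescaled score so the best study gets Qj = 1) can be sketched as follows; the raw scores and the 10-point scale are hypothetical:

```python
import numpy as np

# Hypothetical raw quality scores, one per study, from an equally
# weighted checklist scored out of 10 (both values are assumptions).
raw = np.array([7.0, 9.0, 5.0, 8.0])

rescaled = raw / 10.0          # rescale to the 0-1 range
Q = rescaled / rescaled.max()  # divide by the maximum rescaled score
print(Q)                       # the best study gets Q_j = 1
```

The division by the maximum makes the scores relative to the best study in the meta-analysis, as the updated QE model requires.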
Discussion
The QE model estimate differs from the RE model estimate in two respects: pooled QE estimates favor both larger and better trials (as opposed to the RE model, which penalizes larger trials), and they have a more conservative confidence interval that retains the nominal coverage probability. The implication for the meta-analysis of the magnesium studies in myocardial infarction (Fig. 3) is that the evidence suggests less benefit from the intervention when methodology is also assessed.
When quality
Funding
There was no external funding for this study.
Conflict of interest
JJB owns Epigear International Pty Ltd which sells the Ersatz simulation software used in this study.
References (21)
- et al., Advances in the meta-analysis of heterogenous clinical trials I: the inverse variance heterogeneity model, Contemp. Clin. Trials (2015)
- et al., Meta-analysis in clinical trials, Control. Clin. Trials (1986)
- et al., Meta-analysis of heterogeneous clinical trials: an empirical example, Contemp. Clin. Trials (2011)
- et al., Incorporating variations in the quality of individual randomized trials into meta-analysis, J. Clin. Epidemiol. (1992)
- et al., Combining heterogenous studies using the random-effects model is a mistake and leads to inconclusive meta-analyses, J. Clin. Epidemiol. (2011)
- et al., A quality-effects model for meta-analysis, Epidemiology (2008)
- et al., An alternative quality adjustor for the quality effects model for meta-analysis, Epidemiology (2009)
- et al., A comparison of statistical methods for meta-analysis, Stat. Med. (2001)
- Confidence intervals for a random-effects meta-analysis based on Bartlett-type corrections, Stat. Med. (2011)
- et al., Random-effects meta-analyses are not always conservative, Am. J. Epidemiol. (1999)