A comparison of methods for meta-analysis of small number of studies with binary outcomes
Published: March 6, 2018
Background: Meta-analyses often include only a few studies, in which case estimating between-study heterogeneity is difficult. Inaccurate estimation of heterogeneity can result in biased effect estimates and overly narrow confidence intervals (CIs) in random-effects meta-analysis, especially when using the standard random-effects model with the DerSimonian-Laird (DLRE) estimator.
Methods: We compared the DLRE method with the modified Hartung-Knapp (mHK) method and the beta-binomial (BB) model, considering odds ratios. To compare the methods for meta-analysis of few studies (≤5), we performed a simulation study that used true parameters taken from actually performed meta-analyses. Furthermore, we used an empirical example from an actually performed health technology assessment report including three studies on Sipuleucel-T for prostate cancer.
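The DLRE and mHK pooling steps can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the 2x2 tables are invented for demonstration, and the "modified" Hartung-Knapp variant is assumed here to be the ad hoc correction q* = max(q, 1), one common reading of the modification.

```python
import math

# Hypothetical 2x2 data (events/non-events, treatment then control) for
# three studies; these counts are illustrative, not from the HTA report.
studies = [(20, 80, 10, 90), (15, 85, 8, 92), (25, 75, 12, 88)]

# Log odds ratios and their variances (Woolf method)
y = [math.log((a * d) / (b * c)) for a, b, c, d in studies]
v = [1/a + 1/b + 1/c + 1/d for a, b, c, d in studies]
k = len(y)

# Fixed-effect weights and Cochran's Q
w = [1 / vi for vi in v]
mu_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w, y))

# DerSimonian-Laird estimate of between-study variance tau^2
c_dl = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c_dl)

# Random-effects pooled log odds ratio and its DL standard error
w_re = [1 / (vi + tau2) for vi in v]
mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_dl = math.sqrt(1 / sum(w_re))

# Hartung-Knapp variance, with the assumed ad hoc modification q* = max(q, 1)
q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w_re, y)) / (k - 1)
se_mhk = math.sqrt(max(q, 1.0) / sum(w_re))

t_crit = 4.303  # t quantile, df = k - 1 = 2, two-sided 95%
z_crit = 1.96   # normal quantile for the DL interval

print("OR (DLRE): %.2f [%.2f, %.2f]" % (math.exp(mu),
      math.exp(mu - z_crit * se_dl), math.exp(mu + z_crit * se_dl)))
print("OR (mHK):  %.2f [%.2f, %.2f]" % (math.exp(mu),
      math.exp(mu - t_crit * se_mhk), math.exp(mu + t_crit * se_mhk)))
```

With only k = 3 studies, the t quantile (4.303 instead of 1.96) and the modified variance already widen the mHK interval substantially relative to the DL interval, which is the pattern the comparison is concerned with.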
Results: In our simulation study, all methods showed only small bias of the pooled effect estimates. The mHK method and the BB model (but not the DLRE method) kept the desired 95% empirical coverage probability (the proportion of intervals that contain the true value of interest). Overall, the mHK method performed best with regard to empirical coverage. Power was low for all methods; in particular, the mHK method only very rarely detected an existing effect. In our example, all methods showed effects in the same direction. The odds ratio was 3.32, 3.81 and 3.81 for BB, DLRE and mHK, respectively. The CIs of the BB model and the DLRE method indicated a statistically significant difference, i.e. did not overlap 1 (BB: 2.14 to 5.16; DLRE: 1.77 to 8.24). Although two of the three studies in the meta-analysis were statistically significant and one was nearly so, the 95% CI of the mHK method suggested no statistically significant difference (0.71 to 20.70).
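Empirical coverage, the key criterion above, can be illustrated with a small Monte-Carlo sketch. Everything here is assumed for illustration (the true effect, heterogeneity, standard errors, and the deliberately naive pooling rule) and is not the simulation design of the study; the point is only that ignoring between-study heterogeneity pushes coverage below the nominal 95%.

```python
import math
import random

random.seed(1)
true_mu = 0.5   # assumed true log odds ratio
tau = 0.3       # assumed between-study standard deviation (heterogeneity)
se = 0.4        # assumed within-study standard error
k = 3           # studies per meta-analysis
n_sim, z = 2000, 1.96
covered = 0

for _ in range(n_sim):
    # Draw k study estimates around study-specific true effects
    y = [random.gauss(random.gauss(true_mu, tau), se) for _ in range(k)]
    # Naive equal-weight pooling with a normal interval that ignores tau,
    # the kind of shortcut that makes intervals too narrow
    mu_hat = sum(y) / k
    se_hat = se / math.sqrt(k)
    if mu_hat - z * se_hat <= true_mu <= mu_hat + z * se_hat:
        covered += 1

coverage = covered / n_sim
print("empirical coverage: %.3f" % coverage)  # below the nominal 0.95 here
```

Because the interval uses only the within-study variance, its true coverage for these settings is roughly P(|Z| < 1.96 * se_hat / sqrt(se_hat**2 + tau**2 / k)), which is well under 95%; methods that estimate tau poorly with few studies suffer the same way.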
Conclusion: Bias of the pooled effect estimates is small for all methods. Balancing correct empirical coverage against power is especially difficult in meta-analyses of few studies. The length of the CIs can differ between methods; consequently, different methods can lead to different conclusions. The example shows that the power of the individual included studies might be higher than the power of the meta-analysis. Therefore, in the case of ≤5 studies in a meta-analysis, basing the conclusion on a qualitative synthesis of the individual included studies might be more adequate than referring to the pooled effect estimate.