
Brücken bauen – von der Evidenz zum Patientenwohl: 19. Jahrestagung des Deutschen Netzwerks Evidenzbasierte Medizin e. V.

Deutsches Netzwerk Evidenzbasierte Medizin e. V.

08.03. - 10.03.2018, Graz

A comparison of methods for meta-analysis of a small number of studies with binary outcomes

Meeting Abstract

Authors:

  • Tim Mathes (presenting author) - Institut für Forschung in der Operativen Medizin, Universität Witten/Herdecke
  • Oliver Kuß - Institut für Biometrie und Epidemiologie, Deutsches Diabetes-Zentrum; Institut für Statistik in der Medizin, Universitätsklinikum Düsseldorf

Brücken bauen – von der Evidenz zum Patientenwohl. 19. Jahrestagung des Deutschen Netzwerks Evidenzbasierte Medizin. Graz, Österreich, 08.-10.03.2018. Düsseldorf: German Medical Science GMS Publishing House; 2018. Doc18ebmV-03-5

doi: 10.3205/18ebm019, urn:nbn:de:0183-18ebm0195

Published: March 6, 2018

© 2018 Mathes et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License. See license information at http://creativecommons.org/licenses/by/4.0/.


Text

Background: Meta-analyses often include only a few studies, and estimating the between-study heterogeneity is difficult in this case. An inaccurate estimate of the heterogeneity can result in biased effect estimates and too narrow confidence intervals (CIs) in random-effects meta-analysis; this is especially true for the standard random-effects model with the DerSimonian-Laird (DLRE) estimator.
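
For orientation, the sketch below shows the standard DerSimonian-Laird random-effects computation for log odds ratios in Python; the function name, the numerical example and the use of normal quantiles for the 95% CI are illustrative assumptions, not the implementation used in this work.

```python
import numpy as np

def dersimonian_laird(yi, vi):
    """Standard random-effects pooling with the DerSimonian-Laird estimator.

    yi : per-study log odds ratios
    vi : per-study sampling variances
    Returns the pooled estimate, its standard error and the tau^2 estimate.
    """
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    wi = 1.0 / vi                                   # fixed-effect weights
    theta_fe = np.sum(wi * yi) / np.sum(wi)         # fixed-effect estimate
    q = np.sum(wi * (yi - theta_fe) ** 2)           # Cochran's Q
    c = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
    tau2 = max(0.0, (q - (len(yi) - 1)) / c)        # DL between-study variance
    wi_re = 1.0 / (vi + tau2)                       # random-effects weights
    theta_re = np.sum(wi_re * yi) / np.sum(wi_re)
    se_re = np.sqrt(1.0 / np.sum(wi_re))
    return theta_re, se_re, tau2

# Hypothetical log odds ratios and variances from three small trials
theta, se, tau2 = dersimonian_laird([0.9, 1.2, 1.4], [0.15, 0.25, 0.40])
lo, hi = theta - 1.96 * se, theta + 1.96 * se       # normal-quantile 95% CI
print(np.exp([theta, lo, hi]), tau2)                # pooled OR with CI, and tau^2
```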

Methods: We compared the DLRE method with the modified Hartung-Knapp (mHK) method and the beta-binomial (BB) model, using odds ratios as the effect measure. To compare the methods for meta-analyses of few studies (≤5), we performed a simulation study that used true parameters from actually performed meta-analyses. In addition, we used an empirical example from an actually performed health technology assessment report that included three studies on Sipuleucel-T for prostate cancer.
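
To make the mHK adjustment concrete, the following sketch builds on the DL sketch above; it assumes the common ad hoc variance correction (taking the larger of the Hartung-Knapp and the standard random-effects variance) together with a t-quantile on k-1 degrees of freedom. The beta-binomial model is not shown, and the data are again hypothetical.

```python
import numpy as np
from scipy.stats import t

def modified_hartung_knapp(yi, vi, tau2):
    """Hartung-Knapp interval with the ad hoc ("modified") variance correction.

    yi, vi : per-study log odds ratios and sampling variances
    tau2   : between-study variance, e.g. from the DL estimator above
    Returns the pooled log odds ratio and its 95% CI.
    """
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    k = len(yi)
    wi = 1.0 / (vi + tau2)                          # random-effects weights
    mu = np.sum(wi * yi) / np.sum(wi)               # pooled estimate
    var_re = 1.0 / np.sum(wi)                       # standard RE variance
    var_hk = np.sum(wi * (yi - mu) ** 2) / ((k - 1) * np.sum(wi))
    se = np.sqrt(max(var_hk, var_re))               # modification: CI never shorter than RE
    half = t.ppf(0.975, df=k - 1) * se              # t quantile with k-1 df
    return mu, mu - half, mu + half

# Hypothetical data, reusing an assumed tau^2 estimate
mu, lo, hi = modified_hartung_knapp([0.9, 1.2, 1.4], [0.15, 0.25, 0.40], tau2=0.05)
print(np.exp([mu, lo, hi]))                         # pooled OR with 95% CI
```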

Results: In our simulation study, all methods showed only small bias of the pooled effect estimates. The mHK method and the BB model (but not the DLRE method) maintained the desired 95% empirical coverage probability (the proportion of intervals that contain the true value of interest). Overall, the mHK method performed best with respect to empirical coverage. Power was low for all methods; in particular, the mHK method only very rarely detected an existing effect. In our example, all methods yielded effects in the same direction; the odds ratio was 3.32, 3.81 and 3.81 for BB, DLRE and mHK, respectively. The CIs of the BB model and the DLRE method indicated a statistically significant difference, i.e. they did not include 1 (BB: 2.14 to 5.16; DLRE: 1.77 to 8.24). Although two of the three studies in the meta-analysis were statistically significant and one was nearly statistically significant, the 95% CI of the mHK method suggested a statistically non-significant difference (0.71 to 20.70).
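
Empirical coverage can be illustrated with a small simulation: generate many meta-analyses with a known true effect, compute a 95% CI for each, and record how often the interval contains the truth. The sketch below reuses the dersimonian_laird function from the Background sketch; all simulation settings are illustrative assumptions, not the authors' simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_coverage(true_log_or=1.0, tau2=0.1, k=3, n_sim=2000):
    """Proportion of simulated 95% CIs that contain the true log odds ratio."""
    hits = 0
    for _ in range(n_sim):
        vi = rng.uniform(0.1, 0.4, size=k)                # assumed sampling variances
        yi = rng.normal(true_log_or, np.sqrt(vi + tau2))  # simulated study estimates
        theta, se, _ = dersimonian_laird(yi, vi)          # DL sketch from above
        lo, hi = theta - 1.96 * se, theta + 1.96 * se
        hits += (lo <= true_log_or <= hi)
    return hits / n_sim

print(empirical_coverage())  # tends to fall below 0.95 when only a few studies are pooled
```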

Conclusion: The bias of the pooled effect estimates is small for all methods. Balancing correct empirical coverage and power is especially difficult in meta-analyses of few studies. The length of the CIs can differ between methods; consequently, using different methods can lead to different conclusions. The example shows that the power of the individual included studies might be higher than the power of the meta-analysis. Therefore, in the case of ≤5 studies in a meta-analysis, basing the conclusion on a qualitative synthesis of the individual included studies might be more appropriate than referring to the pooled effect estimate.