gms | German Medical Science

22. Jahrestagung des Deutschen Netzwerks Evidenzbasierte Medizin e. V.

Deutsches Netzwerk Evidenzbasierte Medizin e. V.

24. - 26.02.2021, digital

A new method for testing reproducibility in systematic reviews was developed but needs more testing

Meeting Abstract

  • Dawid Pieper - Universität Witten/Herdecke, Institut für Forschung in der Operativen Medizin (IFOM), Deutschland
  • Simone Hess - Universität Witten/Herdecke, Institut für Forschung in der Operativen Medizin (IFOM), Deutschland
  • Clovis Mariano Faggion jr. - Universitätsklinikum Münster, Poliklinik für Parodontologie und Zahnerhaltung, Münster, Deutschland

Who cares? – EbM und Transformation im Gesundheitswesen. 22. Jahrestagung des Deutschen Netzwerks Evidenzbasierte Medizin. sine loco [digital], 24.-26.02.2021. Düsseldorf: German Medical Science GMS Publishing House; 2021. Doc21ebmPS-6-02

doi: 10.3205/21ebm094, urn:nbn:de:0183-21ebm0944

Published: February 23, 2021

© 2021 Pieper et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License. See license information at http://creativecommons.org/licenses/by/4.0/.


Text

Background/research question: Systematic reviews (SRs) are essential for informing evidence-based decision making in health care across different groups such as clinicians, patients, and policy makers. Reproducible research practices have been found to be uncommon in SRs, which limits the possibility of testing for reproducibility. To the best of our knowledge, however, no test of an entire SR, rather than of single steps, has been conducted to date. We therefore set out to develop and execute a strategy for testing the reproducibility of an SR. Our strategy covered the reproducibility of the following steps of an SR: search, study selection, data extraction, and risk of bias (RoB) assessment.

Methods: We developed an approach to test reproducibility retrospectively and applied it to a randomly chosen SR. We replicated the literature searches and drew a 25% random sample of the retrieved citations, followed by study selection, data extraction, and RoB assessments performed by two reviewers independently. These results were compared narratively with the original review.

Results: We were not able to fully reproduce the original search, resulting in minor differences in the number of citations retrieved. We found larger differences in study selection: some studies should have been included according to the eligibility criteria, while others should not have been. We found only one disagreement in the extracted data, which was unlikely to have an impact on the review's conclusion. The most difficult step to reproduce was the RoB assessment, owing to the lack of clearly reported criteria supporting the RoB ratings.

Conclusion: Our approach resembles a post-publication review performed in a structured way. Reproducibility tests could thus become part of such post-publication reviews, allowing the original review authors to improve their review in terms of reporting and methodological quality. An essential prerequisite for reproducing SRs is that SR authors make all of their data accessible; this enables reproducibility and increases the credibility of SRs. Our approach, like other approaches, needs to undergo further testing and comparison, as the field of testing SRs for reproducibility is still in its infancy.

Competing interests: The authors declare that they have no conflicts of interest.