The accuracy of single- versus dual-reviewer abstract screening: a crowd-based randomized controlled trial
Published: 12 February 2020
Background/research question: To determine the accuracy of single-reviewer screening in correctly classifying abstracts as relevant or irrelevant for literature reviews.
Methods: Crowd-based, parallel-group randomized controlled trial.
Using computer-generated simple randomization, the Cochrane Crowd platform assigned participants to 100 abstracts on either a pharmacological or a public health topic. After completing a training exercise, participants screened the abstracts online within the Cochrane Crowd platform, without time restrictions. They classified each abstract as relevant or irrelevant based on pre-defined inclusion and exclusion criteria. For the pharmacological topic, only randomized controlled trials were eligible; for the public health topic, both randomized and non-randomized studies met the inclusion criteria.
Results: Two hundred and eighty participants made 24,942 screening decisions on 2,000 randomly selected abstracts. On average, each abstract was screened 12 times. The majority of participants (74%) rated their experience with literature screening for systematic reviews as very good or good. Overall, single-reviewer abstract screening missed 12% of relevant studies (sensitivity: 88.4%; 95% confidence interval [CI], 83.6% to 91.9%). By comparison, dual-reviewer abstract screening missed 2% of relevant studies (sensitivity: 97.8%; 95% CI, 95.5% to 99.0%). The corresponding specificities were 80.1% (95% CI, 77.9% to 82.0%) for single-reviewer screening and 70.4% (95% CI, 67.4% to 73.3%) for dual-reviewer screening. Single-reviewer abstract screening missed more relevant studies for the public health topic than for the pharmacological topic (16% vs. 9%). Regression analyses detected no statistically significant effect of native-speaker status, domain knowledge, or experience with literature reviews on the correctness of decisions.
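For reference, the reported miss rates are simply the complements of the sensitivities. A minimal worked relation using the standard definitions and the single-reviewer estimate reported above (the abstract does not state how the confidence intervals were computed, and none are derived here):

\[
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{specificity} = \frac{TN}{TN + FP}, \qquad
\text{miss rate} = 1 - \text{sensitivity}
\]
\[
1 - 0.884 \approx 0.12 \quad \text{(about 12\% of relevant studies missed by single-reviewer screening)}
\]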
Conclusion: Single-reviewer abstract screening is suboptimal for systematic reviews. Institutions that use single reviewers to screen abstracts for systematic reviews should reconsider this approach. Single-reviewer abstract screening might, however, be a viable option for rapid reviews, which deliberately lower methodological standards to provide decision-makers with accelerated evidence synthesis products.
Competing interests: none