Article
Pilot testing of ASReview: a machine-learning assisted screening tool for evidence synthesis
Published: 12 March 2024
Background/research question: ASReview is an open-source machine-learning assisted screening tool aiming to improve the systematic review literature screening process ([1], [2]). It reorders records according to their predicted relevance, so the reviewer does not have to screen all records from a systematic search in arbitrary order. We implemented ASReview in the screening process for a continuously updated database of cancer trials [3]. The pilot study’s three aims were to assess ASReview’s user-friendliness, its effectiveness, and its most important pros and cons compared to a traditional literature screening setup.
Methods: We tested ASReview on a sample of records retrieved from PubMed (searched May 2023). We were interested in clinical trials assessing one specific type of immunotherapy, tumour-infiltrating lymphocytes. One reviewer screened titles/abstracts and, if necessary, full texts in the same step to make a final eligibility decision. We split the screening process into two phases: first, we screened all records filtered by “Clinical Study” on PubMed (based on the assumption that most of the relevant records would be tagged as such); then we used these records to train ASReview prior to screening the full sample. We decided to stop screening after 100 consecutive irrelevant records.
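The stopping rule described above (halt after 100 consecutive irrelevant records) can be sketched as a simple check over the stream of screening decisions. This is an illustrative sketch, not part of ASReview’s API; the function and variable names are hypothetical:

```python
def should_stop(decisions, threshold=100):
    """Return True once the last `threshold` screening decisions
    were all irrelevant (True = relevant, False = irrelevant)."""
    if len(decisions) < threshold:
        return False
    # Stop only if no relevant record appeared among the most
    # recent `threshold` decisions.
    return not any(decisions[-threshold:])

# 99 consecutive irrelevant records are not yet enough to stop...
assert should_stop([False] * 99) is False
# ...but 100 in a row after a relevant hit trigger the rule.
assert should_stop([True] + [False] * 100) is True
```

Because ASReview presents records in descending predicted relevance, a long run of irrelevant decisions suggests the remaining pool is unlikely to contain many relevant records, which is the rationale behind this heuristic.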
Results: Our search returned 14,004 records; 604 were tagged as “Clinical Study” and were screened resulting in 77 relevant and 527 irrelevant records. These 604 records were used to train ASReview for screening of the full sample. At the conference, we will present the results from the second screening phase, i.e. total number of relevant and screened records in the full sample.
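The first-phase counts reported above imply that roughly 13% of the “Clinical Study” subset was relevant, while that subset covered only about 4% of the full search. A quick consistency check (all numbers taken from the abstract):

```python
total_search = 14004      # records returned by the PubMed search
clinical_study = 604      # records tagged "Clinical Study" on PubMed
relevant, irrelevant = 77, 527

# The phase-1 screening decisions should account for every filtered record.
assert relevant + irrelevant == clinical_study

prevalence = relevant / clinical_study          # share relevant in phase 1
subset_share = clinical_study / total_search    # share of search pre-filtered
print(f"{prevalence:.1%} relevant in phase 1, "
      f"{subset_share:.1%} of the search filtered")
# → 12.7% relevant in phase 1, 4.3% of the search filtered
```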
Conclusion: ASReview is easy to use and effectively identified relevant records during the first screening phase, although we cannot exclude the possibility of relevant records remaining in the pool of unscreened records. ASReview’s most important limitations are that it requires Python to run locally; that references cannot easily be added to an existing project; that the reviewer must always make a final decision, as there is no option to skip a reference; and that there is no default two-screener setup. Compared to a traditional systematic review, where all records are screened in duplicate, the potential time saving is large. This must be weighed against the risk of missing relevant records, and ASReview’s simple interface necessitates clearly predefined procedures to enable a transparent and reproducible workflow.
Competing interests: All authors declare no conflict of interest.
References
1. ASReview team. ASReview website. Utrecht University; [accessed 2023 Oct]. Available via: https://asreview.nl/
2. van de Schoot R, de Bruin J, Schram R, et al. An open source machine learning framework for efficient and transparent systematic reviews. Nat Mach Intell. 2021;3:125–133. DOI: 10.1038/s42256-020-00287-7
3. Pragmatic Evidence Lab. Cancer Immunotherapy Evidence - a Living Library. 2023. Available via: https://ciel-library.org/