gms | German Medical Science

Information Retrieval Meeting (IRM 2022)

10.06. - 11.06.2022, Köln

Two roadmaps for using machine learning in evidence synthesis, across disciplines

Meeting Abstract

  • presenting/speaker Christian Kohl - Julius Kühn Institute (JKI) – Federal Research Centre for Cultivated Plants, Germany
  • Heather Melanie R. Ames - Norwegian Institute of Public Health, Norway
  • Maria Heinze - Julius Kühn Institute (JKI) – Federal Research Centre for Cultivated Plants, Germany
  • corresponding author Ashley Elizabeth Muller - Norwegian Institute of Public Health, Norway
  • Bjørn Tommy Tollnås - Norwegian Institute of Public Health, Norway
  • Stefan Unger - Julius Kühn Institute (JKI) – Federal Research Centre for Cultivated Plants, Germany
  • Jose Meneses-Echavez - Norwegian Institute of Public Health, Norway
  • Tiril C. Borge - Norwegian Institute of Public Health, Norway

Information Retrieval Meeting (IRM 2022). Cologne, 10.-11.06.2022. Düsseldorf: German Medical Science GMS Publishing House; 2022. Doc22irm03

doi: 10.3205/22irm03, urn:nbn:de:0183-22irm038

Published: June 8, 2022

© 2022 Kohl et al.
This article is an Open Access article and is distributed under the terms of the Creative Commons Attribution 4.0 License. See license information at http://creativecommons.org/licenses/by/4.0/.


Outline

Text

Learning objectives:

  • Gain a plain-language overview of available machine learning (ML) techniques in different steps of systematic reviews.
  • Learn how two organizations approached ML, trust-building, and review digitalization differently, using different strategies and resources.
  • Brainstorm with other participants about how to build trust in ML in your own organization, or discuss priorities for automating different phases or types of reviews.
  • Network with others for future collaboration.

Introduction: In the production of systematic reviews, guidelines, and policy advice, ML can contribute to more precise searching, fewer studies to screen, less manual sorting and data extraction, and even quicker critical appraisal, with more advanced methods in the pipeline.
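To make the screening step concrete, the short sketch below is a purely illustrative example in Python with scikit-learn, not the software used at NIPH or JKI: it trains a simple text classifier on a few records a reviewer has already screened and then ranks the remaining records by predicted relevance, so that likely includes surface first. The titles and labels are invented placeholders.

# Minimal sketch of ML-assisted screening prioritization (illustrative only):
# train a simple text classifier on a handful of already-screened records,
# then rank the remaining records so reviewers see likely-relevant ones first.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical titles already screened by a reviewer (1 = include, 0 = exclude).
labelled_texts = [
    "Randomised trial of exercise therapy for chronic low back pain",
    "Effect of crop rotation on soil nitrogen in winter wheat",
    "Cohort study of physiotherapy outcomes in back pain patients",
    "Genome assembly of a fungal plant pathogen",
]
labels = [1, 0, 1, 0]

# Records still waiting to be screened.
unlabelled_texts = [
    "Pilates versus usual care for low back pain: a pilot RCT",
    "Weed suppression by cover crops in organic maize",
]

vectorizer = TfidfVectorizer(stop_words="english")
X_train = vectorizer.fit_transform(labelled_texts)
X_new = vectorizer.transform(unlabelled_texts)

model = LogisticRegression().fit(X_train, labels)
scores = model.predict_proba(X_new)[:, 1]  # predicted probability of "include"

# Present records to reviewers in descending order of predicted relevance.
for score, text in sorted(zip(scores, unlabelled_texts), reverse=True):
    print(f"{score:.2f}  {text}")

Production screening tools typically wrap this idea in an active-learning loop, retraining the model as reviewers screen more records and stopping once few relevant records are expected to remain.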

It is primarily up to health and welfare review organizations to decide which software to use, in which phases of reviews, and how to teach their researchers about ML. The point of departure in agricultural science is different: systematic reviews are only now gaining traction, and packaging ML along with them can be an important selling point.

Workshop methods:

  • 5 min: Online polls to learn which types of review products participants are involved in, their familiarity with ML, their attitudes or skepticism, and any particular interest in specific ML techniques or review phases.
  • 20 min: We will compare and contrast our organizations’ and disciplines’ approaches to reviews and ML. In public health (NIPH), ML is often seen as disruptive, introducing uncertainty and error to methodological gold standards. In agricultural science (JKI), systematic review methods are only recently scaling up, and the JKI team uses ML as a major selling point of these reviews to researchers. JKI created their own systematic review software and hired an AI researcher to further develop advanced but user-friendly techniques, whereas NIPH relies on off-the-shelf products and focuses on implementation evaluation. We are both working towards the same goal, but from very different points of departure, and with different constraints and opportunities.
  • 10 min: An overview for non-specialists of the basics of ML within reviews (text mining, neural networks, etc.).
  • 30 min: Participants will select facilitated small groups to join, based on topics they wish to brainstorm with others. The number and sizes of groups will be tailored to workshop size, but we anticipate at least one group on “Which steps in the review process can benefit from ML, and how?” (JKI), focused on ML developments, and one group on “How can we build reviewer trust in ML?” (NIPH), focused on implementation.
  • 15 min: Each group will summarize their discussions. We will conclude with a discussion of the feasibility of collaboration across fields and propose a future working group of review experts and ML experts across disciplines that could dovetail with ICASR’s current activities.

Keywords: machine learning, implementation, trust, software