gms | German Medical Science

Information Retrieval Meeting (IRM 2022)

10.06. - 11.06.2022, Köln

Implementation and evaluation activities to build support for machine learning

Meeting Abstract

  • Heather Melanie R. Ames - Norwegian Institute of Public Health, Norway
  • Tiril Cecile Borge - Norwegian Institute of Public Health, Norway
  • Christine Hillestad Hestevik - Norwegian Institute of Public Health, Norway
  • Jan Peter William Himmels - Norwegian Institute of Public Health, Norway
  • Patricia Sofia Jacobsen Jardim - Norwegian Institute of Public Health, Norway
  • Jose F. Meneses-Echavez - Norwegian Institute of Public Health, Norway; Universidad Santo Tomás, Facultad de Cultura Física, Deporte y Recreación, Bogotá, Colombia
  • Ashley Elizabeth Muller (corresponding author, presenter/speaker) - Norwegian Institute of Public Health, Norway
  • Hong Lien Nguyen - Norwegian Institute of Public Health, Norway
  • Christopher James Rose - Norwegian Institute of Public Health, Norway
  • Stijn Rita Patrick Van De Velde - MAGIC Evidence Ecosystem Foundation

Information Retrieval Meeting (IRM 2022). Cologne, 10.-11.06.2022. Düsseldorf: German Medical Science GMS Publishing House; 2022. Doc22irm01

doi: 10.3205/22irm01, urn:nbn:de:0183-22irm018

Published: June 8, 2022

© 2022 Ames et al.
This article is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 License. For license details, see http://creativecommons.org/licenses/by/4.0/.



Text

During this workshop, the Norwegian Institute of Public Health’s (NIPH) machine learning (ML) implementation team within the Cluster for Reviews and Health Technology Assessments will present our implementation and evaluation activities. Participants will have the chance to dive into the key activities they think might be most useful to their own institutions and to ask questions about lessons learned.

No knowledge of machine learning is required.

Learning objectives: Learn from, engage with, and provide feedback on NIPH’s implementation of ML within all of our evidence synthesis products.

  • 30 min Introduction: Our ML implementation team sprang from the need to produce more HTAs and systematic reviews, faster, during the COVID-19 pandemic. We introduced and scaled up ML among non-specialists using off-the-shelf software, beginning with a six-month team whose mandate was to explore the potential benefits of ML. We will invite participants to engage with what we have done and to reflect on how transferable these strategies could be to their own organizations.
  • 40 min World Café: Four facilitators from the ML team will each dive into more detail about a key element of our team’s strategy, with 10 minutes each and a focus on engaging participants and answering questions. Participants will rotate through the tables in any order they choose, so that everyone receives more in-depth information from each table, or they may remain at a table that particularly interests them. Topics will include:
    a) Embedding process and performance evaluations into existing commissioned reviews (i.e. how we “tested” ML functions within ongoing work);
    b) Using a train-the-trainer model of non-specialists;
    c) Trial-and-error experimentation to match various ML functions to the “right” review types and phases, both retrospectively and in ongoing reviews; and
    d) Creating training materials that integrated software use into division workflows, while not duplicating a software provider’s expertise.
  • 20 min Knowledge Exchange: The floor will be open for remaining questions. We will administer online quizzes to explore which strategy elements participants find most relevant to their organization and ask them to guess which were the most and least successful from our point of view. We will conclude by discussing the role of a non-specialist, implementation-focused ML network or working group in providing training to evidence synthesis organizations.

Keywords: machine learning, software, implementation, evaluation