gms | German Medical Science

68. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e. V. (GMDS)

17.09.–21.09.2023, Heilbronn

Explaining the optimistic performance evaluation of newly proposed methods: A cross-design validation experiment

Meeting Abstract

  • Christina Nießl - Institute for Medical Information Processing, Biometry, and Epidemiology, LMU Munich, München, Germany; Munich Center for Machine Learning (MCML), München, Germany
  • Sabine Hoffmann - Institute for Medical Information Processing, Biometry, and Epidemiology, LMU Munich, München, Germany; Department of Statistics, LMU Munich, München, Germany
  • Theresa Ullmann - Institute for Medical Information Processing, Biometry, and Epidemiology, LMU Munich, München, Germany
  • Anne-Laure Boulesteix - Institute for Medical Information Processing, Biometry, and Epidemiology, LMU Munich, München, Germany

Deutsche Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie. 68. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e. V. (GMDS). Heilbronn, 17.-21.09.2023. Düsseldorf: German Medical Science GMS Publishing House; 2023. DocAbstr. 58

doi: 10.3205/23gmds077, urn:nbn:de:0183-23gmds0777

Published: September 15, 2023

© 2023 Nießl et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License. For license information see http://creativecommons.org/licenses/by/4.0/.


Text

The constant development of new data analysis methods in many fields of research is accompanied by an increasing awareness that these new methods often perform better in their introductory paper than in subsequent comparison studies conducted by other researchers. We attempt to explain this discrepancy by conducting a systematic experiment that we call “cross-design validation of methods”. In the experiment, we select two methods designed for the same data analysis task, reproduce the results reported in each original paper, and then reevaluate each method based on the study design (i.e., data sets, competing methods, and evaluation criteria) that was used to demonstrate the abilities of the other method. We conduct the experiment for two data analysis tasks, namely cancer subtyping using multi-omic data and differential gene expression analysis. Three of the four methods included in the experiment indeed perform worse when evaluated on the new study design, a drop that is mainly attributable to the differing data sets. Apart from illustrating the many degrees of freedom in the assessment of a method and their effect on its performance, our experiment suggests that the performance discrepancies between original and subsequent papers may be caused not only by the non-neutrality of the authors proposing the new method but also by differences in the level of expertise and field of application. Authors of new methods should thus focus not only on a transparent and extensive evaluation but also on comprehensive method documentation that enables the correct use of their methods in subsequent studies.
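To make the experimental procedure concrete, the following is a minimal Python sketch of the cross-design swap described above. It is not the authors’ actual code; the StudyDesign fields, the evaluate helper, and all names are hypothetical placeholders standing in for real benchmark runs.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class StudyDesign:
    """The three design components varied in the experiment."""
    datasets: List[str]      # data sets the method is evaluated on
    competitors: List[str]   # competing methods it is compared against
    criteria: List[str]      # evaluation criteria used for scoring

def evaluate(method: str, design: StudyDesign) -> Dict[str, float]:
    # Placeholder: a real run would apply `method` to every data set in
    # `design.datasets`, benchmark it against `design.competitors`, and
    # score it on each criterion. Dummy scores keep the sketch runnable.
    return {criterion: 0.0 for criterion in design.criteria}

def cross_design_validation(
    method_a: str, design_a: StudyDesign,
    method_b: str, design_b: StudyDesign,
) -> Dict[Tuple[str, str], Dict[str, float]]:
    """Reproduce each method on its own study design, then swap designs."""
    return {
        (method_a, "own design"):   evaluate(method_a, design_a),
        (method_a, "cross design"): evaluate(method_a, design_b),
        (method_b, "own design"):   evaluate(method_b, design_b),
        (method_b, "cross design"): evaluate(method_b, design_a),
    }

Comparing the “own design” results (the reproduction) with the “cross design” results (evaluation under the other method’s design) is what reveals the optimism reported above: three of the four methods scored worse under the swapped design.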

The authors declare that they have no competing interests.

The authors declare that an ethics committee vote is not required.

This contribution has already been published: [1]


References

1. Nießl C, Hoffmann S, Ullmann T, Boulesteix AL. Explaining the optimistic performance evaluation of newly proposed methods: A cross-design validation experiment. Biom J. 2023 Mar 31:e2200238. DOI: 10.1002/bimj.202200238