gms | German Medical Science

51st Annual Meeting of the Deutsche Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie

Deutsche Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e. V. (gmds)

10. - 14.09.2006, Leipzig

Practical experiences on the necessity of external validation

Meeting Abstract

  • Inke R. König - Institut für Medizinische Biometrie und Statistik, Universitätsklinikum Schleswig-Holstein, Campus Lübeck, Lübeck
  • Andreas Ziegler - Institut für Medizinische Biometrie und Statistik, Universitätsklinikum Schleswig-Holstein, Campus Lübeck, Lübeck

Deutsche Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e.V. (gmds). 51. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie. Leipzig, 10.-14.09.2006. Düsseldorf, Köln: German Medical Science; 2006. Doc06gmds017

The electronic version of this article is the complete one and is available at: http://www.egms.de/de/meetings/gmds2006/06gmds121.shtml

Published: September 1, 2006

© 2006 König et al.
This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.de). It may be reproduced, distributed, and made publicly available, provided that the author and source are credited.


Text

In developing a prognostic model, data-dependent methods are usually used to optimize the fit to the data at hand. For example, with support vector machines, suitable parameters for the kernel functions need to be selected; similarly, the parameters of a logistic regression model are fit optimally to the data. This data-dependent optimization increases the risk of overfitting, which is further aggravated by small sample sizes [1].
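
The point can be illustrated with a small sketch. The following Python fragment is not part of the original abstract; it assumes scikit-learn and purely synthetic data, and simply contrasts the apparent accuracy of a tuned support vector machine on its training data with its accuracy on previously unseen data:

    # Hypothetical sketch: data-dependent tuning of SVM kernel parameters on a
    # small synthetic sample; the apparent (resubstitution) accuracy typically
    # overstates the accuracy achieved on new data, i.e. the model overfits.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=120, n_features=20, random_state=0)
    X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.5,
                                                      random_state=0)

    # Kernel parameters (C, gamma) are chosen to optimize the fit to the data at hand.
    search = GridSearchCV(SVC(kernel="rbf"),
                          param_grid={"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
                          cv=5)
    search.fit(X_train, y_train)

    print("apparent accuracy on training data:", search.score(X_train, y_train))
    print("accuracy on unseen data:           ", search.score(X_new, y_new))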

To estimate the extent of overfitting, a prognostic model is often validated internally. Here, the same data set is artificially split into separate samples on which model development and testing are performed; prominent examples are tenfold cross-validation and bootstrapping. By contrast, stringent model validation requires at least two independent data sets. Depending on how the second data set is obtained, temporal and external validation are distinguished [2]: temporal validation is based on data collected at the same centers but at a later time point, whereas external validation uses data from different centers.
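
The contrast between internal and stringent validation can likewise be sketched in code. The fragment below is again only an illustration with synthetic data (not the study data) and assumes scikit-learn; it compares tenfold cross-validation within a development sample to the performance of the fitted model on an independent sample with a mild covariate shift, mimicking data from another center:

    # Hypothetical sketch: internal validation (tenfold cross-validation) versus
    # validation on an independent, slightly shifted data set.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=400, n_features=10, random_state=1)
    X_dev, y_dev = X[:200], y[:200]          # development sample
    X_ext, y_ext = X[200:] + 0.3, y[200:]    # "external" sample with covariate shift

    model = LogisticRegression(max_iter=1000)

    # Internal validation: tenfold cross-validation within the development data.
    cv_acc = cross_val_score(model, X_dev, y_dev, cv=10).mean()

    # Stringent validation: fit on the development data, test on the independent set.
    model.fit(X_dev, y_dev)
    ext_acc = model.score(X_ext, y_ext)

    print(f"internal (tenfold CV) accuracy: {cv_acc:.2f}")
    print(f"accuracy on independent data:   {ext_acc:.2f}")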

In the presentation, different techniques for validating a prognostic model are described and discussed. Prognostic models developed with a number of ensemble methods are compared with regard to differences between temporal and external validation. For this, exemplary data sets for predicting functional independence in stroke patients are used [3]. Our results demonstrate that classical internal validation techniques are insufficient to estimate prediction quality on temporal validation data. Furthermore, temporal validation only poorly predicts the external generalizability of a model.


References

1. Harrell FE, Lee KL, Mark DB. Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat Med. 1996;15:361-87.
2. Altman DG, Royston P. What do we mean by validating a prognostic model? Stat Med. 2000;19:453-73.
3. The German Stroke Collaboration. Predicting outcome after acute ischemic stroke: an external validation of prognostic models. Neurology. 2004;62:581-85.