gms | German Medical Science

51. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie

Deutsche Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e. V. (gmds)

10. - 14.09.2006, Leipzig

Publication bias in medical informatics evaluation literature: Recognizing the problem, its impact and the causes

Meeting Abstract

  • Christof Machan - Institute for Health Information Systems, UMIT – University for Health Sciences, Medical Informatics and Technology, Hall
  • Elske Ammenwerth - Institute for Health Information Systems, UMIT – University for Health Sciences, Medical Informatics and Technology, Hall
  • Thomas Bodner - Department of Biological Psychiatry, Innsbruck Medical University, Innsbruck
  • Nicolette de Keizer - Department of Medical Informatics, Academic Medical Center, Universiteit van Amsterdam, Amsterdam

Deutsche Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e.V. (gmds). 51. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie. Leipzig, 10.-14.09.2006. Düsseldorf, Köln: German Medical Science; 2006. Doc06gmds129

The electronic version of this article is complete and available at: http://www.egms.de/de/meetings/gmds2006/06gmds227.shtml

Published: September 1, 2006

© 2006 Machan et al.
This is an open-access article distributed under the terms of the Creative Commons License (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.de). It may be reproduced, distributed, and made publicly available, provided the author and source are credited.



Introduction

Publication bias is a phenomenon that has probably existed for as long as the results of scientific research have been published. In brief, it is defined as the publication or non-publication of results depending on their direction and statistical significance: positive and/or statistically significant results appear more likely to be published than negative and/or non-significant results [1]. Publication bias has been a well-known issue for decades (if not centuries), and its effect becomes steadily more important with the paradigmatic change in the way decisions are made: from the basis of personal experience to the solid foundation of scientific evidence [2]. To gain knowledge on a subject that is as comprehensive and "objective" as possible, the results of primary studies may be combined in systematic reviews or meta-analyses. Greenhalgh [3] defines a systematic review as an overview of primary studies using explicit and reproducible methods, whereas a meta-analysis is described as the "mathematical synthesis of results of two or more primary studies that addressed the same hypothesis in the same way".

A number of studies investigating the effects of publication bias have been carried out, some of them in the field of biomedical informatics, giving an overview of its existence and risk factors [2] or assessing to what degree the direction and significance of results influence publication [4], [5]. However, there still seems to be no consensus on the degree of influence of publication bias in the medical informatics evaluation literature. We see the assessment of this influence as a basis for answering the question of whether, and how, publication bias in our field should be dealt with.

In order to assess the existence of publication bias and its influence on meta-analyses, we conducted a small-scale study to answer three study questions:

1.
What is the ratio of positive vs. negative findings in published evaluation studies?
2.
Is a statistical assessment of publication bias for evaluation studies in health informatics possible?
3.
How many evaluation studies conducted are published, and what are the reasons for not submitting?

Methods

1: What is the ratio of positive vs. negative findings in published evaluation studies?

If publication bias has a strong influence on this field, one would expect a significantly higher number of published studies describing positive results than negative results. Comparable statistics were presented by Dickersin [2]. To answer this question, a random sample of 86 references was drawn from a database hosted by the Institute for Health Information Systems at UMIT. The database comprises 1,035 references to papers published between 1982 and 2002 in the field of medical informatics evaluation research (http://evaldb.umit.at) [6]. For each abstract, we investigated whether the authors found unambiguous results and, if so, whether these were positive or negative with regard to the hypothesis on which the publication was based.

2: Is a statistical assessment of publication bias for evaluation studies in health informatics possible?

A funnel plot is a simple way to depict the phenomenon of publication bias in a sample of studies. It can briefly be described as a scatter plot that classifies every single study by effect size and study quality. The assumption is that fewer studies with low quality and a small effect size, or studies describing a negative effect, are published; the plot can visualize this sign of publication bias [7]. We decided to attempt a funnel plot for evaluation studies on CPOE (computerized physician order entry); CPOE evaluation has recently been a focus of interest, as the potential of such systems to significantly improve patient care or even save lives has been described frequently (e.g. [8]).
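To make the two axes of such a plot concrete, the following sketch computes, for a few entirely hypothetical two-arm study results (illustrative numbers only, not data from this study), the log odds ratio as the effect size and its standard error as a common precision measure; plotting these pairs, one point per study, yields a funnel plot:

```python
import math

# Hypothetical CPOE evaluation results: medication error counts in an
# intervention arm vs a control arm. These numbers are invented for
# illustration and do not come from the studies discussed here.
studies = [
    # (errors_intervention, n_intervention, errors_control, n_control)
    (12, 500, 30, 500),
    (4, 120, 9, 115),
    (25, 1000, 40, 980),
]

def funnel_point(a, n1, c, n2):
    """Return (log odds ratio, standard error) for one two-arm study.

    The log odds ratio serves as the x-axis (effect size) of a funnel
    plot; the standard error is a typical choice for the y-axis.
    """
    b, d = n1 - a, n2 - c  # non-events in each arm
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, se

points = [funnel_point(*s) for s in studies]
for (lor, se), s in zip(points, studies):
    print(f"study {s}: log OR = {lor:.2f}, SE = {se:.2f}")
```

If publication bias is absent, the points scatter symmetrically around the pooled effect, with more spread among the imprecise (high-SE) studies; a missing corner of small, negative-result studies is the classic warning sign.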

Consequently, we had to find a sufficient number of studies evaluating CPOE systems that measured the same effect and provided quantitative data on effect size and study quality. We searched the database described above (http://evaldb.umit.at) as well as PubMed for quantitative studies on CPOE. The bibliographies of the reviews found were searched for further relevant references. Overall, about 140 CPOE studies were identified. All abstracts found were categorized by the effect measured (e.g. costs, time consumption, appropriateness of care, adverse drug events).

3: How many evaluation studies conducted are published, and what are the reasons for not submitting?

This question was answered in two steps: a written survey was sent out by e-mail to members of the EFMI (European Federation for Medical Informatics) and IMIA (International Medical Informatics Association) working groups on evaluation and to the first authors of evaluation papers from the last five years (identified through a comprehensive PubMed query). In addition, the survey was sent to CIOs and other persons responsible for IT management in hospitals in Austria, Germany, the Netherlands and Switzerland. Participants were asked to report on evaluation studies conducted in the last five years, whether these had been published, and on the reasons for not publishing.

Results

1: What is the ratio of positive vs. negative findings in published evaluation studies?

In our random sample of 86 abstracts, 60 (69.8%) showed positive results, 12 (14%) negative results, and 14 (16.3%) showed mixed or neutral results and could therefore not be attributed to either of these categories. Typical phrases found in abstracts indicating positive results included (depending on the outcome variable measured) "is a useful system for improving" and "time showed significant reductions"; a typical phrase describing negative results was "the system was less accurate than". Typical indications that an abstract could not be attributed to either category were that no comparison could be made, that no conclusions could be drawn, that positive and negative results were balanced, or that no difference could be found.
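The imbalance in this sample can be quantified with a simple exact binomial test on the 72 conclusive abstracts (the study itself reports no such test; this is an illustrative sketch): under the null hypothesis that positive and negative findings are equally likely to appear in print, observing 60 or more positives out of 72 is extremely unlikely.

```python
from math import comb

# Observed in the random sample: 60 positive vs 12 negative abstracts
# (the 14 mixed/neutral abstracts are excluded from this comparison).
pos, neg = 60, 12
n = pos + neg  # 72 conclusive studies

# One-sided exact binomial test: the probability of seeing >= 60
# positives among 72 conclusive studies if positive and negative
# results were equally likely to be published.
p_value = sum(comb(n, k) for k in range(pos, n + 1)) / 2**n
print(f"P(X >= {pos} | n = {n}, p = 0.5) = {p_value:.2e}")
```

A p-value this small shows the published literature is heavily skewed toward positive findings; it cannot, by itself, distinguish publication bias from the possibility that the evaluated systems genuinely tend to work.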

2: Is a statistical assessment of publication bias for evaluation studies in health informatics possible?

The searching and categorization of publications is currently ongoing. Studies on the effect of CPOE on the appropriateness of care, such as its effect on medication errors (around 50 studies), and studies on the effect of CPOE on the outcome of care, such as its effect on adverse drug events (around 25 studies), seem the most promising for a meta-analysis. Most of these studies describe a positive effect, which makes a funnel plot showing signs of publication bias likely. Detailed results will be presented at the conference.

3: How many evaluation studies conducted are published, and what are the reasons for not submitting?

Overall, by the end of March 2006, we had received feedback from 106 respondents (mostly from academia). They reported having conducted 216 studies in the last five years, of which 112 (52%) have been or will be published in journals or at larger conferences, while 104 (48%) have not been published, or only in a limited way. The reasons for not publishing indicated by the respondents comprise (multiple nominations possible): limited generalizability (27 nominations; e.g. the context seemed too unique), study not yet finished (19), no time for writing (12), results not of interest to others (11; e.g. only for internal use), methods not adequate (10; e.g. sampling seemed insufficient), organization prohibited publication (10; e.g. confidential information), rejection by a journal (6), results too negative (5), no interest in academic output (5), evaluation of a first prototype only (4).

Discussion

In our random sample of 86 evaluation studies, about two thirds of the publications describe positive results. This could be seen as an indicator of publication bias; comparable results have been presented by Dickersin [2]. However, the survey did not support this explanation, as the answers "findings were too negative to be published" and "organization prohibited publication" were not given very frequently. The reasons for not publishing seem rather to lie in the setting of the study (e.g. constructive formative evaluation for internal use only to improve one's own system, no scientific news expected), or simply in a lack of time for writing.

As regards the statistical assessment of publication bias in meta-analyses, it is too early to draw conclusions. An important lesson from categorizing the CPOE references found is that the application systems evaluated (regarding their exact functionality), the effects measured, and other parameters show remarkable variation. To draw a meaningful funnel plot as part of a meta-analysis, it will be important to find a sufficient number of studies showing a high degree of homogeneity. Given the differences in the functionality of the systems evaluated, in the settings and criteria of evaluation, and in other parameters such as workflow, this heterogeneity seems to be one of the major obstacles to the accomplishment of a meaningful meta-analysis and therefore to the development of evidence-based health informatics. Further research is necessary to investigate the effect of this heterogeneity on the results of meta-analyses in medical informatics.


References

1.
Rothstein H, Sutton A, Borenstein M. Publication bias in meta-analysis. In: Rothstein H, Sutton A, Borenstein M, editors. Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. Chichester: Wiley; 2005. p. 1-7.
2.
Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA 1990;263(10):1385-1389.
3.
Greenhalgh T. How to read a paper: Papers that summarise other papers (systematic reviews and meta-analyses). BMJ 1997;315(7109):672-675.
4.
Dickersin K, Min YI, Meinert CL. Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards. JAMA 1992;267(3):374-8.
5.
Olson CM, Rennie D, Cook D, Dickersin K, Flanagin A, Hogan JW, Zhu Q, Reiling J, Pace B. Publication bias in editorial decision making. JAMA 2002;287(21):2825-8.
6.
Ammenwerth E, de Keizer N. An inventory of evaluation studies of information technology in health care: Trends in evaluation research 1982 - 2002. Methods Inform Med 2004.
7.
Sterne J, Becker B, Egger M. The funnel plot. In: Rothstein H, Sutton A, Borenstein M, editors. Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. Chichester: Wiley; 2005. p. 75-99.
8.
Bates DW, Leape LL, Cullen DJ, Laird N, Petersen LA, Teich JM, Burdick E, Hickey M, Kleefield S, Shea B, Vander Vliet M, Seger DL. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA 1998;280(15):1311-6.