gms | German Medical Science

GMS German Plastic, Reconstructive and Aesthetic Surgery – Burn and Hand Surgery

Deutsche Gesellschaft der Plastischen, Rekonstruktiven und Ästhetischen Chirurgen (DGPRÄC)
Deutsche Gesellschaft für Verbrennungsmedizin (DGV)

ISSN 2193-7052

The science of scientific evaluation

Editorial


  • corresponding author Peter M. Vogt - Klinik für Plastische, Hand- und Wiederherstellungschirurgie, Medizinische Hochschule Hannover, Germany

GMS Ger Plast Reconstr Aesthet Surg 2014;4:Doc07

doi: 10.3205/gpras000026, urn:nbn:de:0183-gpras0000268

Published: October 8, 2014

© 2014 Vogt.
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.de). It may be copied, distributed, and made publicly available, provided that the author and source are credited.


Outline

Editorial

Medical research and its publications are under constant scrutiny, not only from science itself but also in political debates and in questions of financing, quality of treatment, relevance, and future impact. As the number of publications grows exponentially and web-based availability makes access virtually unlimited, the need for validation and evaluation continues to expand.

Over the last 15–20 years, the value of scientific publications has increasingly been assessed with the help of the Journal Impact Factor (IF), especially in the German system. The IF is a numerical score that measures the average number of citations received by papers published in a particular journal. This method claims to rank the importance of scientific journals [1] and has conditioned researchers, to an unprecedented degree, to publish their work preferentially in journals with a high Journal Impact Factor. Journals frequently even advertised increases in their IF to attract more high-quality manuscripts. An IF that rises because of the submissions it attracts, and in turn attracts further submissions, created a “vicious circle” within the scientific community.
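For illustration only (the editorial does not spell out the calculation), the standard two-year Journal Impact Factor reported in the Journal Citation Reports is computed as

\[
\mathrm{IF}_{2014} \;=\; \frac{\text{citations received in 2014 by items the journal published in 2012 and 2013}}{\text{number of citable items the journal published in 2012 and 2013}}
\]

so a journal whose 2012–2013 articles were cited 400 times in 2014, and which published 100 citable items in those two years, would have a 2014 IF of 4.0.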

The Journal Impact Factor was originally devised to provide librarians with a tool for identifying journals to purchase. Its use as a measure of the scientific quality of the research in an individual paper has therefore been highly controversial, and it has become obvious that the Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment [2].

In their evaluation systems, medical university faculties increasingly used the IF for academic qualifications, academic appointments, and the allocation of research funds. The IF became an integral part of these systems and turned into a new “currency” of scientific evaluation. It appeared objective, robust, and easy to use. However, the surgical specialties, whose historically grown journals focus on purely operative aspects of medical treatment, recognized a significant drawback: their research was undervalued compared with that of non-surgical specialties and basic research disciplines.

Conversely, impact factors do not prevent fraud and plagiarism, even in the highest-ranking publications.

In the San Francisco Declaration on Research Assessment, subtitled “Putting science into the assessment of research”, a group of editors and publishers of scholarly journals who had met during the Annual Meeting of the American Society for Cell Biology (ASCB) in San Francisco, CA, on December 16, 2012, developed a set of recommendations.

They stated: “There is a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties.”

In particular, they recommended that there is

  • a need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations;
  • a need to assess research on its own merits rather than on the basis of the journal in which the research is published;
  • a need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact).

As a general rule, they recommended not to use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions [2].

The Association of the Scientific Medical Societies in Germany (AWMF) has now formulated its assessment of the Journal Impact Factor in an open access publication: “Evaluation of medical research performance – position paper of the Association of the Scientific Medical Societies in Germany (AWMF)” [3]. The consensus version published in GMS German Medical Science – an Interdisciplinary Journal is based on an intensive discussion among international and national experts and representatives of scientific organizations. First, during the Berlin Forum of the AWMF, held in Berlin on October 18, 2013, international experts presented data on methods for evaluating medical research performance. This was followed by further discussion of a first draft of the article among the representatives and within three writing groups, which resulted in the consensus version discussed here [3].

The AWMF suggests major modifications of the current system of evaluation and does not consider the Journal Impact Factor “an appropriate measure” for evaluating individual publications or their authors. According to its suggestions, the evaluation of medical research performance should predominantly be based on explicitly phrased and communicated aims and on informed peer review: “The journal impact factor is not an appropriate measure and therefore it shall not be used for evaluating the research performance of individuals or institutions”. The authors recommend replacing the IF as soon as possible with more appropriate indicators of scientific impact on medicine, such as adequately normalized citation rates, the reception by the scientific community, and the evaluation of the usefulness for the practice of medicine or for society as a whole.
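The position paper does not prescribe a specific indicator, but one widely used form of normalized citation rate, given here purely as an illustration, is the field-normalized citation score, in which each publication’s citation count is divided by the expected citation count for publications of the same field, publication year, and document type:

\[
\mathrm{NCS}_i \;=\; \frac{c_i}{e_i}, \qquad \mathrm{MNCS} \;=\; \frac{1}{n}\sum_{i=1}^{n}\frac{c_i}{e_i},
\]

where \(c_i\) is the number of citations received by publication \(i\) and \(e_i\) is the average number of citations received by comparable publications; a value above 1 indicates citation impact above the field average.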

This, however, requires new tools of evaluation in order to determine the scientific “yield” of a given amount of funding and to evaluate interdisciplinary research, research consortia, and multi-author publications.

According to the authors, the evaluation of individual research performance should act on three levels:

1. evaluation of publications (recognized scientific journals with peer review, books, guidelines, citation by guideline recommendations),
2. active contributions to scientific organizations or boards and editorships,
3. leadership in organizing scientific conferences.

Until this is implemented, open access journals with commercial and non-commercial backgrounds will continue to develop. As the scientific community develops new evaluation tools, non-commercial open access journals managed by the scientific societies themselves will be an important part of that process.


References

1. Does the ‘Impact Factor’ Impact Decisions on Where to Publish? APS News. 2006 Apr;15(4). Available from: http://www.aps.org/publications/apsnews/200604/impact.cfm
2. San Francisco Declaration on Research Assessment. 2012. Available from: http://www.ascb.org/dora-old/files/SFDeclarationFINAL.pdf
3. Herrmann-Lingen C, Brunner E, Hildenbrand S, Loew TH, Raupach T, Spies C, Treede RD, Vahl CF, Wenz HJ. Evaluation of medical research performance – position paper of the Association of the Scientific Medical Societies in Germany (AWMF). GMS Ger Med Sci. 2014;12:Doc11. DOI: 10.3205/000196