gms | German Medical Science

GMS Journal for Medical Education

Gesellschaft für Medizinische Ausbildung (GMA)

ISSN 2366-5017

Beyond the Impact Factor – What do alternative metrics have to offer?

Editorial: Alternative Metrics

  • corresponding author Götz Fabry - Albert-Ludwigs-Universität Freiburg, Abt. für Med. Psychologie, Freiburg/Brg, Germany; GMS Journal for Medical Education, Assistant Chief Editor, Erlangen, Germany
  • corresponding author Martin R. Fischer - Klinikum der Universität München, Institut für Didaktik und Ausbildungsforschung in der Medizin, München, Germany; GMS Journal for Medical Education, Chief Editor, Erlangen, Germany

GMS J Med Educ 2017;34(2):Doc27

doi: 10.3205/zma001104, urn:nbn:de:0183-zma0011042

This is the English version of the article.
The German version can be found at: http://www.egms.de/de/journals/zma/2017-34/zma001104.shtml

Received: April 26, 2017
Revised: April 26, 2017
Accepted: April 26, 2017
Published: May 15, 2017

© 2017 Fabry et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License. See license information at http://creativecommons.org/licenses/by/4.0/.


Editorial

Some of our readers might have already noticed that lately some of the articles in the JME are marked with a “donut,” a ring of colored, intertwined strands around a number at its center. It is the emblem of Altmetric.com [http://www.altmetric.com], a company named after the general term for indicators measuring the dissemination of scientific literature beyond the Impact Factor (“alternative metrics” or “altmetrics”) [1], [2]. The donut indicates which online media refer to the respective article: the greater the number of colors in the ring, the greater the number of different media linked to the article. General news sites and newspapers, the scientific bibliographic platform Mendeley, sites for post-publication peer review (e.g. Publons, see below), references in Scopus (a bibliographic database run by Elsevier), Wikipedia, blogs, social media such as Facebook and Twitter, YouTube, and a multitude of other resources are analyzed. The number in the center of the ring is the “attention score,” a weighted measure representing the coverage of the respective article in the media analyzed (for more details on how the score is calculated, see https://goo.gl/jSLn1Y and [3]). A click on the emblem leads to a website that reports the exact details and the geographical distribution of the media-related activities.
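
To give a rough idea of the logic behind such a weighted score, the following minimal Python sketch computes a weighted sum of mention counts per source type. The weights and source categories are illustrative placeholders only, not Altmetric.com’s actual weighting scheme, which also adjusts for factors such as the reach of the individual accounts involved.

    # Minimal sketch of a weighted attention score; all weights are invented.
    SOURCE_WEIGHTS = {
        "news": 8.0,       # assumption: news coverage counts most
        "blog": 5.0,
        "wikipedia": 3.0,
        "twitter": 1.0,
        "facebook": 0.25,
    }

    def attention_score(mentions):
        """Weighted sum of mention counts, e.g. {"news": 2, "twitter": 15}."""
        return sum(SOURCE_WEIGHTS.get(source, 0.0) * count
                   for source, count in mentions.items())

    print(attention_score({"news": 2, "blog": 1, "twitter": 15}))  # -> 36.0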

What is the relevance of this kind of analysis and of the alternative score? First of all, it expresses a change in scientific communication, albeit a very slow and time-delayed one [4]. Currently, the productivity of scientists and the quality of their work are mainly measured by their publications. Despite the fact that the digital revolution permeates all aspects of our daily lives, the form of these publications has remained remarkably constant. Scientific evidence is still published in journal articles that are, in many cases, still organized in volumes and issues with page numbers, even though they are only rarely printed on paper, at least in medicine and the natural sciences. Furthermore, the system of gratification for scientific achievement relies on these structures, too. The number of articles with very high impact remains of crucial importance for a career in science. To date, this impact is measured almost exclusively by the Impact Factor, a measure of how often, on average, the articles a journal published in the two preceding years are cited in a given year [5]. The use of the Impact Factor as a means to evaluate research quality, especially with regard to individuals, has been increasingly criticized. The San Francisco Declaration on Research Assessment (DORA), launched in 2012 and so far signed by more than 500 organizations and 12,000 scientists, states that the Impact Factor and similar journal metrics should not be used to assess the quality of individual articles or their authors, nor for decisions on hiring, promotion and tenure [http://www.ascb.org/dora/], [6]. However, there is no sign that a fundamental change will take place in the foreseeable future, despite the fact that a number of alternative strategies exist regarding the different aspects of traditional publishing conventions.
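
For illustration, the arithmetic behind this classic two-year measure can be written down in a few lines of Python; the figures in the example are invented.

    def impact_factor(citations, citable_items):
        """Citations received in a given year to articles from the two
        preceding years, divided by the number of citable items the
        journal published in those two years."""
        return citations / citable_items

    # Invented example: 300 citations in 2016 to articles from 2014-2015,
    # and 150 citable items published in 2014-2015 -> Impact Factor 2.0
    print(impact_factor(300, 150))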

In recent years, for instance, “post-publication peer review” (PPPR) has taken root as an alternative to the usual “pre-publication peer review” [7]. The reasons are multifaceted. First of all, there is almost no evidence that traditional peer review actually increases the quality of manuscripts, in part because of the methodological challenges that research on this question faces [8]. A recently published study examined this issue by looking at journals on the BMC platform, since these journals publish an article’s “pre-publication history,” meaning all reviews and author responses [9]. The results show that the reviewers made few suggestions for changes overall. While the majority of these suggestions improved the quality of the manuscripts, there were also some suggestions that decreased it. Moreover, the reviewers missed many flaws and errors in the manuscripts that would have been relatively easy to detect with simple tools such as a checklist based on the CONSORT statement [10]. Further criticism refers to the publication delays caused by pre-publication peer review and to the fact that the review process usually involves only two or three reviewers whose expertise is not always known or transparent [7].

PPPR is intended to overcome these weaknesses. An example from the field of medical education is “AMEE MedEdPublish” [www.mededpublish.org], which went online very recently. Manuscripts are published immediately without prior review or, as is the case with MedEdPublish, after a merely formal check by the editorial office. Readers review the manuscripts after publication. While this procedure can generally take many forms, MedEdPublish allows anyone who is registered on the platform to review and rate an article. The hope is that involving a potentially unlimited number of reviewers will increase the reliability and validity of the review. However, there is also a panel of official reviewers whose judgment determines whether or not an article receives a recommendation. The initiators intend for this recommendation to lead to the respective article being referenced in PubMed Central which, in turn, would put it on par with publications from PubMed-indexed journals with traditional peer review. In some places, at least, this would also make it count toward career advancement.

While this kind of PPPR is intended to replace traditional peer review, the term also encompasses all kinds of criticism and comments relating to an article after publication, even when the article has undergone the usual peer review. Publication scandals that occur time and again make it quite clear that this is indeed necessary. Flaws in publications on stem cells in highly ranked journals, for instance, were discovered by well-established bloggers and resulted in the retraction of the affected papers [11]. Letters to the editor are in fact a long-standing means of criticizing or commenting on an article, but compared to the potential of the internet, this type of scientific communication seems rather old-fashioned. If letters are published at all, they are often released with considerable delay, and they do not always lead to a reply by the authors, let alone a reaction by the journals, which – on top of it all – have a conflict of interest when it comes to publishing critical letters [12].

In light of this, it seems obvious that the internet should be used to critique and discuss scientific publications. However, what appears to be self-evident is not as trivial as it may seem: where and how, exactly, should these discussions and commentaries take place? How, for instance, can the well-known difficulties of communication within social networks be avoided, e.g. polemics or hypercriticism under the cover of anonymity or fake profiles? Who is going to participate in this discussion? What motives will prompt the participants to do so?

With regard to the site of the discussion, solutions are already apparent in the form of social networks for scientists, such as ResearchGate [www.researchgate.net] and Academia [www.academia.edu]. On these platforms scientists can share their publications, read and comment on the publications of others, and engage in discussions on different topics, as in other internet forums. On ResearchGate these activities, along with the responses they trigger, are registered and condensed into a specific measure (the “RG score”) that – at least in the company’s view – mirrors the reputation of the individual scientist within the platform and, perhaps, even beyond. Studies examining ResearchGate revealed that, for now, the social and network-related functions are rarely used except to share scientific articles (although these have to be uploaded first, which is not without problems in terms of copyright) [13]. An interesting question in this regard is whether the number of “reads” on ResearchGate (which is also one of the components of the RG score) correlates with other measures, for instance the number of citations. A recent study showed that “younger” articles outnumber “older” articles on the platform and tend to be read more often. A comparison of reads on ResearchGate with the number of citations in the database Scopus and the number of readers on the bibliographic platform Mendeley yielded rather low correlations which, according to the authors of the study, might indeed indicate different target audiences [14]. This could mean that the measures of the different media and platforms do indeed capture different aspects of the dissemination of and response to science, which would justify their respective uses [15].
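
The kind of comparison made in such studies can be sketched in a few lines of Python; the counts below are invented, and Spearman’s rank correlation is only one of several coefficients that can be used for this purpose.

    from scipy.stats import spearmanr

    # Invented counts for five articles: ResearchGate reads vs. Scopus citations.
    reads = [120, 45, 300, 80, 60]
    citations = [3, 1, 2, 8, 0]

    # Rank correlation between the two measures and its p-value.
    rho, p_value = spearmanr(reads, citations)
    print(f"Spearman's rho = {rho:.2f} (p = {p_value:.2f})")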

However popular ResearchGate, Academia and Mendeley might be by now, it remains unsatisfactory from a scientific perspective that all these platforms are run by privately owned companies pursuing their own opaque, commercial agendas. In this context another initiative appears to be much more promising: in 2013 PubMed launched “PubMed Commons,” a project that is open to everyone who has authored at least one publication listed in PubMed. The service can be used via the free individual NCBI account and makes it possible to comment on any article indexed in PubMed. The box for the comments is placed right under the article abstract, and the comments are visible to everyone who retrieves the article in PubMed. Thus, all articles of the JME (and the ZMA) published since 2005 are also open for comments and discussion in this manner. Evidently, this option has not yet been used, and in general the intensity of the exchange on PubMed Commons is rather modest. The reasons for this are obvious: given a severe lack of time, only very few scientists will feel compelled to become active, especially since there is no reward in terms of career advancement; the only payoff lies in idealistic motives and scientific interest. Additional reasons that are generally known from social networks might also be relevant. Scientists at the beginning of their careers, in particular, might wonder whether publicly criticizing a paper written by an established colleague could impair their future career opportunities. In addition, social dynamics might have a negative impact in terms of stereotyping or gender bias [16]. These considerations illustrate that PPPR, too, might have drawbacks, which is one reason why the JME currently retains blinded pre-publication peer review.

At least with regard to rewarding the preparation of a scholarly peer review (pre- and post-publication), solutions are also being developed. Reviewers can create an account on the “Publons” portal [publons.com] to publicly document their activity as reviewers. Since a number of big publishers (Springer, Thieme, BMJ, etc.) support the portal, reviewers are asked to document their reviews on Publons as part of the routine reviewing process. Depending on the type of contract with the journals, different modes of documentation exist, ranging from a simple, anonymous count of completed reviews to the full-text documentation of the reviews themselves. Even if a publisher or journal has no contract with Publons yet, as is currently the case for the JME, reviews can still be documented quantitatively. Publons has also introduced an index measuring the quantity and quality of the reviews. The idea of the Publons founders is that this measure might pay off in terms of career advancement or at least academic recognition [17].

This cursory overview of some of the online activities that contribute to the Altmetric donut and score makes it clear that a multitude of opportunities for public scientific communication exist beyond journal articles and the heavily criticized Impact Factor. Currently, their potential has not yet been fully explored, and their significance cannot be fully appraised [18]. However, it seems necessary and reasonable to track these activities and to actively contribute to them as far as individual prospects and resources allow. In this light, we invite all of our authors, reviewers and readers to refer to their work in alternative media and networks in order to intensify our professional discourse and, not least, to increase the attention given to the JME [19].


Competing interests

The authors declare that they have no competing interests.


References

1.
Buschman M, Michalek A. Are alternative metrics still alternative? Bull Ass Inform Sci Technol. 2013;39(4):35-39. DOI: 10.1002/bult.2013.1720390411
2.
Chisolm MS. Altmetrics for medical educators. Acad Psychiatry. 2016. DOI: 10.1007/s40596-016-0639-3
3.
Trueger NS, Thoma B, Hsu CH, Sullivan D, Peters L, Lin M. The altmetric score: a new measure for article-level dissemination and impact. Ann Emerg Med. 2015;66(5):549-553. DOI: 10.1016/j.annemergmed.2015.04.022
4.
Bartling S, Friesike S. Towards another scientific revolution. In: Bartling S, Friesike S (eds). Opening science – The evolving guide on how the internet is changing research, collaboration and scholarly publishing. Heidelberg: Springer Open; 2014. p. 3-15. DOI: 10.1007/978-3-319-00026-8_1
5.
Garfield E. The history and meaning of the Journal Impact Factor. JAMA. 2006;295(1):90-93. DOI: 10.1001/jama.295.1.90
6.
Alberts B. Impact Factor Distortions. Science. 2013;340(6134):787. DOI: 10.1126/science.1240319
7.
Teixeira da Silva JA, Dobránszki J. Problems with traditional science publishing and finding a wider niche for post-publication peer review. Account Res. 2015;22(1):22-40. DOI: 10.1080/08989621.2014.899909
8.
Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev. 2007;(2):MR000016. DOI: 10.1002/14651858.mr000016.pub3
9.
Hopewell S, Collins GS, Boutron I, Cook J, Shanyinde M, Wharton R, Shamseer L, Altman DG. Impact of peer review on reports of randomised trials published in open peer review journals: retrospective before and after study. BMJ. 2014;349:g4145. DOI: 10.1136/bmj.g4145
10.
Schulz KF, Altman DG, Moher D, CONSORT Group. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMC Med. 2010;8:18. DOI: 10.1186/1741-7015-8-18
11.
Knoepfler P. Reviewing post-publication peer review. Trends Genet. 2015;31(5):221-223. DOI: 10.1016/j.tig.2015.03.006
12.
Schriger DL, Altman DG. Inadequate post-publication review of medical research. BMJ. 2010;341:c3803. DOI: 10.1136/bmj.c3803
13.
Nicholas D, Herman E, Jamali H, Rodríguez-Bravo B, Boukacem-Zeghmouri C, Dobrowolski T, Pouchot S. New ways of building, showcasing, and measuring scholarly reputation. Learn Publ. 2015;28(3):169-183. DOI: 10.1087/20150303
14.
Thelwall M, Kousha K. ResearchGate articles: age, discipline, audience size and impact. J Ass Inform Sci Technol. 2017;68(2):468-479. DOI: 10.1002/asi.23675
15.
Costas R, Zahedi Z, Wouters P. Do "altmetrics" correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective. J Ass Inform Sci Technol. 2015;66(10):2003-2019. DOI: 10.1002/asi.23309
16.
Bastian H. A stronger post-publication culture is needed for better science. PLOS Med. 2014;11(12):e1001772. DOI: 10.1371/journal.pmed.1001772
17.
van Noorden R. The scientists who get credit for peer review. Nature News. October 9, 2014. http://www.nature.com/news/the-scientists-who-get-credit-for-peer-review-1.16102 DOI: 10.1038/nature.2014.16102
18.
Bornmann L. Do altmetrics point to the broader impact of research? An overview of benefits and disadvantages of altmetrics. J Informetrics. 2014;8:895-903. DOI: 10.1016/j.joi.2014.09.005
19.
Kwok R. Altmetrics make their mark. Nature. 2013;500:491-493. DOI: 10.1038/nj7463-491a