Over time, many metrics have been developed to compare the influence, prestige, or relevance of individual scientific journals. Such an evaluation can be taken into account when an author selects a journal for a publication, and it can also serve as a measure of personal performance once a paper has successfully survived the journal's review process.
These metrics compare the number of citations received with the total number of "citable items" published in a journal, so the following definition is assumed:
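As a sketch, written out for the best-known instance, the two-year Impact Factor of a journal in year $y$:

$$
\mathrm{IF}_y = \frac{\text{citations received in year } y \text{ by items published in years } y-1 \text{ and } y-2}{\text{number of citable items published in years } y-1 \text{ and } y-2}
$$

The other indices mentioned below follow the same pattern with a different citation window.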
The journal that achieves the higher number in this calculation is considered the "better" one. The best-known index of this kind is the Impact Factor, which considers the last two years. Similar indices are the Immediacy Index (one year), the 5-year Impact Factor, and the Impact per Publication (three years).
Criticism (see literature):
With the Impact Factor, all citations are weighted equally. To address this valuation inequality, there are prestige-weighted metrics, which take the influence of the citing journal into account: if a journal receives a citation from an important journal, this citation counts for more than one from an insignificant journal. Examples of such metrics are the Eigenfactor and the SCImago Journal Rank.
The Journal Citation Reports from Clarivate Analytics are compiled every year and include the Impact Factor, the Eigenfactor, the Immediacy Index, the 5-year Impact Factor, and many more metrics not mentioned here. Journals from the Web of Science and their citations are evaluated.
The Journal Metrics, which are based on the Scopus database, are freely available on the web. Included metrics are, among others, the Impact per Publication and the SCImago Journal Rank.
Google Scholar Metrics evaluates journals by the h-index (in the form of the h5-index, calculated over the last five years). For the definition of this metric, see the subsection "Measuring the influence of scholars".
Comparing scientists and their academic achievements can be important in different scenarios. On the one hand, such benchmarking can play a role in recruitment or in the acquisition of third-party funding; on the other hand, it can also play a role in budget allocation. This section presents general metrics for the performance evaluation of researchers; the following paragraphs then deal with the evaluation of individual publications.
There are a number of metrics that are easy to compute. First, the number of publications can be interpreted as a measure of productivity. Further, impact can be measured by the total number of citations received across all publications or by the average number of citations per publication.
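A minimal sketch of these simple measures, using hypothetical citation counts:

```python
# Hypothetical citation counts, one entry per publication of a researcher.
citations = [100, 5, 3, 2, 1]

n_publications = len(citations)                    # productivity
total_citations = sum(citations)                   # overall impact
avg_citations = total_citations / n_publications   # average impact per publication

print(n_publications, total_citations, avg_citations)  # 5 111 22.2
```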
Such measures do not take into account factors such as the length of the academic career, the number of authors per publication, or the number of self-citations.
The h-index, published by Hirsch in 2005, has become very common. The aim of this metric is to express productivity and influence by one number.
The h we are looking for is the maximum, meaning that at least h publications have each been cited at least h times. This can be determined graphically by sorting all publications in descending order of citations and plotting the citation count against the rank; the intersection with the angle bisector (the line y = x) gives the h-index. In the example below, h = 3.
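A minimal sketch of this definition in Python (the function name is illustrative; the list reuses the hypothetical citation counts from above):

```python
def h_index(citations):
    """Largest h such that at least h publications have >= h citations each."""
    # Sort citation counts in descending order, most cited first.
    sorted_counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(sorted_counts, start=1):
        # h can grow as long as the publication at this rank
        # still has at least `rank` citations.
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([100, 5, 3, 2, 1]))  # -> 3: three papers have at least 3 citations each
```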
Although Hirsch already noted in 2005 that self-citations do not have a major influence on the h-index, factors such as discipline, length of the research career, or age of the publications are not taken into account in the calculation. Nor does the h-index reflect whether a few publications were particularly successful: even if the most cited article in the example had received 100 citations, the result would still be h = 3.
| Point of criticism | Examples of alternative metrics |
| --- | --- |
| Most frequently cited publications | g-index (see the sketch below), e-index, A-index, R-index |
| Age of publications | contemporary h-index, AR-index |
| Number of authors | mostly normalisations |
| Length of scientific career | see e.g. Harzing (2007) |
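To illustrate how an alternative from the table rewards highly cited papers, here is a minimal sketch of the g-index, the largest g such that the g most cited publications together have received at least g² citations (numbers again purely hypothetical):

```python
def g_index(citations):
    """Largest g such that the g most cited publications together
    have at least g*g citations (bounded by the number of publications)."""
    sorted_counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, count in enumerate(sorted_counts, start=1):
        total += count  # cumulative citations of the top `rank` publications
        if total >= rank * rank:
            g = rank
    return g

# Unlike the h-index (h = 3), the outlier with 100 citations now pays off:
print(g_index([100, 5, 3, 2, 1]))  # -> 5, since 111 >= 25
```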
Databases such as Web of Science or Scopus offer an author search. The search results can usually be analysed further, and this analysis also reports the h-index.
Alternatively, the analysis can be done with Google Scholar. If the search for a scientist returns his or her Google Scholar profile, this also includes the h-index. The Publish or Perish software uses Google Scholar as its data basis for determining the h-index and many alternative metrics. With the Google Calculator, a browser add-on, various metrics are calculated directly while searching in Google Scholar. In general, the h-index of the same person differs between databases, since each operator selects which journals are evaluated. Google Scholar usually delivers the highest h-index.
Of course, the individual publication and its impact in the (scientific) world can also be considered. With the help of such metrics, the most important publications could be determined, e.g. for a third-party funding application.
The simplest metric is based on the citations of a publication, although this is easily manipulated by self-citations. In addition, it takes a relatively long time for the influence to become visible: other scientists must first read the article and cite it in a new paper, and this new article must itself go through the publication process.
Metrics of the journal in which the article was published (e.g. the Impact Factor) are also often used instead. But the journal's metric only reveals the "potential" of a publication, i.e. that the article has survived the review process. Of course, a journal with a high Impact Factor will also contain publications with little or no citation.
Alternative metrics are also called article-based metrics, open metrics, or altmetrics. The aim is to use the new possibilities of the internet and, rather than taking citations as the measure, to start earlier: searches, downloads, and discussions on the internet are taken into account.
This applies first of all to views and downloads on the internet. User libraries in reference management software can also be evaluated. Studies have shown, for example, that there is a correlation between download numbers and citations.
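Such a correlation can be checked with standard tools; a minimal sketch with purely hypothetical per-article numbers (not data from any real study):

```python
from statistics import correlation  # Python 3.10+

# Hypothetical per-article data: downloads and later citation counts.
downloads = [120, 340, 90, 560, 210]
citations = [3, 12, 1, 20, 7]

# Pearson correlation coefficient between downloads and citations.
print(round(correlation(downloads, citations), 2))
```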
Furthermore, mentions of the publication can be found in blogs, Twitter, news portals, Wikipedia, discussion platforms, YouTube, GitHub, and so on. It has been shown, however, that frequent mentions in blogs do not correlate with a high number of citations, because articles frequently cited in social media are often popular-science or topical publications.
Some journals (e.g. PLOS, Nature, AIP, ...) offer metrics directly on the individual pages of their articles that show the influence of the publications on the internet. In some cases, such metrics are also applied to books. The company Altmetric evaluates many internet sources to calculate a score; the result can be visualised in detail and analysed further. As a free product, a bookmarklet can be downloaded that, when triggered on an article's home page, searches for mentions and displays the score calculated from them.
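How exactly Altmetric weights its sources is the company's own business; as a purely illustrative sketch of the general idea, a weighted mention count with assumed weights:

```python
# Assumed, purely illustrative weights per source type -- not Altmetric's actual ones.
WEIGHTS = {"news": 8.0, "blog": 5.0, "wikipedia": 3.0, "twitter": 0.25}

def toy_attention_score(mentions):
    """Weighted sum of mention counts; unknown source types count as 1."""
    return sum(WEIGHTS.get(source, 1.0) * n for source, n in mentions.items())

print(toy_attention_score({"news": 2, "blog": 1, "twitter": 40}))  # 2*8 + 5 + 40*0.25 = 31.0
```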
Other products that deal with this topic are Plum Analytics, Webometric Analyst and Impact Story.