I recently published my first sole-author paper, which was also my first open access publication (plus first preprint). It was a fun side project unrelated to my primary research, comparing the influence of ecology journals using a variety of metrics (like the journal impact factor). The paper was published in Ideas in Ecology and Evolution, a journal I’m really excited about. It’s a great outlet for creative ideas in the field of EcoEvo, plus they have a section on the Future of Publishing, which has explored some exceptionally innovative ideas regarding scientific publishing and peer review.
Hocking, D. J. 2013. Comparing the influence of ecology journals using citation-based indices: making sense of a multitude of metrics. Ideas in Ecology and Evolution, 6(1), 55–65. doi:10.4033/iee.v6i1.4949
Most researchers are at least moderately familiar with the Journal Impact Factor (JIF), the first and most prevalent citation-based metric of journal influence. The JIF represents the average number of citations in a given year to articles a journal published in the previous 2 years. Despite its prevalence, the JIF has a number of serious problems, such as drawing inference from the mean of a highly skewed distribution (a small minority of articles receive the vast majority of citations in any journal). Other criticisms of the JIF include an insufficient time window and bias among journals: only "substantial" articles are counted in the denominator of the average, but citations to all articles are counted in the numerator. Numerous metrics have been proposed to improve upon the JIF. I compared 11 citation-based metrics for 110 ecology journals.
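To make the skew problem concrete, here is a minimal sketch of the two-year JIF calculation and how the mean compares with the median on a skewed citation distribution. The per-article citation counts below are invented for illustration, not data from the paper.

```python
# Hypothetical illustration of the two-year Journal Impact Factor (JIF)
# and why the mean of a skewed citation distribution can mislead.

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """JIF = citations this year to items published in the previous two
    years, divided by the number of 'citable' items from those years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Made-up per-article citation counts for one journal: most articles get
# few citations, one gets many -- a highly skewed distribution.
cites = [0, 0, 1, 1, 1, 2, 2, 3, 4, 86]

jif = impact_factor(sum(cites), len(cites))    # mean-based, like the JIF
median_cites = sorted(cites)[len(cites) // 2]  # a skew-robust alternative

print(jif)           # 10.0 -- driven almost entirely by one article
print(median_cites)  # 2 -- closer to a "typical" article in this journal
```

The mean (10.0) says little about a typical article here, which is the core statistical objection to the JIF.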
The 11 metrics: Journal Impact Factor (JIF), 5-year Journal Impact Factor (JIF5), SCImago Journal Rank (SJR), Source-Normalized Impact per Paper (SNIP), Eigenfactor, Article Influence (AI), H-index, contemporary h-index (Hc-index), g-index, e-index, and AR-index.
The relationships among metrics can be visualized with a principal components analysis (PCA) plot. On the left side of the plot are the metrics that are averaged per article, whereas the metrics that group on the right side are those that tend to be higher for journals with higher publication rates (not explicitly on a per-article basis). Also evident from the PCA plot is that no single metric can capture the full multidimensional complexity of scholarly influence among journals. Different metrics can be used to understand different aspects of influence, impact, and prestige.
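For readers who want to reproduce this kind of plot, a PCA like this can be sketched with NumPy alone: standardize each metric, take the singular value decomposition, and project the journals onto the components. The journal scores below are invented placeholders, not values from the paper.

```python
# Minimal PCA sketch with NumPy; the scores matrix is hypothetical.
import numpy as np

# rows = journals, columns = metrics (e.g. JIF, Eigenfactor, H-index)
scores = np.array([
    [4.2, 0.05, 60.0],
    [1.1, 0.01, 20.0],
    [8.5, 0.09, 95.0],
    [2.3, 0.02, 35.0],
])

# standardize each metric so units don't dominate, then PCA via SVD
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)

pcs = z @ vt.T                    # journal coordinates on the components
explained = s**2 / np.sum(s**2)   # proportion of variance per component

print(explained)  # with correlated metrics, the first component dominates
```

Plotting the first two columns of `pcs` (e.g. with matplotlib) gives the kind of ordination plot described above, with correlated metrics loading together.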
In addition to whether a metric is on a per-article basis, metrics split philosophically on whether they use network theory or just direct citations. The Eigenfactor, AI, and SJR use variations of the Google PageRank algorithm. This basically means that citations from highly cited journals are worth more than citations from less influential journals.
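The network idea can be sketched in a few lines: a journal's score is the stationary vector of a citation matrix, found by power iteration, so citations from well-cited journals count for more. The three-journal citation matrix below is invented, and real implementations (Eigenfactor, AI, SJR) add damping, exclude self-citations, and normalize differently.

```python
# Toy PageRank-style score: a journal's influence depends on the
# influence of the journals citing it. Matrix values are hypothetical.
import numpy as np

# C[i, j] = citations from journal j to journal i (3 made-up journals)
C = np.array([
    [0.0, 30.0, 10.0],
    [20.0, 0.0, 40.0],
    [5.0, 10.0, 0.0],
])

P = C / C.sum(axis=0)        # column-stochastic: each journal's outgoing shares
scores = np.full(3, 1 / 3)   # start from a uniform score vector
for _ in range(100):         # power iteration converges to the leading eigenvector
    scores = P @ scores
scores /= scores.sum()

print(scores)  # higher score = cited more by other well-cited journals
```

Here journal 1 ends up on top: it draws heavy citation traffic from both other journals, which is exactly the behavior the eigenvector-based metrics reward.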
Overall, I would recommend using Article Influence (AI; available via Web of Science) or alternatively the SCImago Journal Report (SJR; available via Scopus) in place of the JIF when average article influence is of interest. The Eigenfactor is the best metric of the total influence of a journal on science. The Source-Normalized Impact per Paper (SNIP) can be especially useful when comparing journals across disparate fields of research. It corrects for differences in publishing and citation practices among fields of study.
Since review articles tend to get more citations than original research articles on average, journals that publish reviews tend to have higher scores across all metrics. It's therefore not surprising that the top-ranked ecology journals across most metrics are Annual Review of Ecology, Evolution, and Systematics, Trends in Ecology and Evolution (TREE), and Ecology Letters. Some of the journals and metrics are listed below, but you can find much more information in the original article.