
Research Impact Metrics: Locating, Evaluating, and Using

Author Metrics

In considering the research impact of an individual, it is best to take a holistic view of their contributions, one that relies primarily on expert opinion (e.g., internal and external peer review) informed and supported by various metrics, but not dictated or driven by those metrics. A single metric, even if normalized as described below, will never tell the full story.

The h-index is a flawed metric for several reasons; see the section at the bottom titled "What's wrong with the h-index?" If using citation-based metrics to assess an individual or their research, it is best to use metrics that are normalized, so that the field of research is taken into consideration and the numbers are contextualized. Normalized author-level metrics to consider include overall percentiles in Web of Science via the Author Impact Beamplot, or the number of articles in the top 1%, 10%, or 20% within a field using the Field Baselines in Essential Science Indicators.

When it comes to citation-based metrics and assessing an individual's impact, it is better to assess contributions individually (e.g., at the article level or per research output) than to rely on a single overall metric for the researcher.

What's wrong with the h-index?

What is the h-index? A scholar has an h-index of h if h of their papers have each been cited at least h times; the h-index is the largest such h. It is a metric that combines the number of publications and the number of citations into a single value. It is shown graphically below.

[Figure: a researcher's papers ranked by number of citations; the h-index is the point at which a paper's rank equals its citation count.]

By en:user:Ael 2, vectorized by pl:user:Vulpecula, Public Domain
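
To make the definition concrete, here is a minimal Python sketch (not part of the original guide; the function name h_index and the sample citation counts are illustrative assumptions). It ranks a list of per-paper citation counts from most to least cited and returns the largest rank at which a paper still has at least that many citations.

    def h_index(citations):
        """Return the h-index for a list of per-paper citation counts.

        A scholar has an h-index of h if h of their papers have been
        cited at least h times each.
        """
        ranked = sorted(citations, reverse=True)  # most-cited papers first
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank   # the paper at this rank still has at least `rank` citations
            else:
                break      # counts are non-increasing, so h cannot grow further
        return h

    # Example: h = 3, because at least three papers have 3+ citations,
    # but there are not four papers with 4+ citations.
    print(h_index([25, 8, 5, 3, 3, 1, 0]))  # prints 3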
 

So, what's wrong with the h-index?

1. There is no meaningful reason why the point at which the number of papers equals the number of citations per paper should have any special significance.

2. It combines two metrics (papers and citations) into a single value that conflates "productivity" and "impact", with some unusual consequences. For example, three researchers with the same h-index (say 10) could have very different publication and citation patterns over their careers. One might have published 10 articles in three years, all of which were cited ten or more times. Another published 100 articles over 30 years, with only 10 of them cited ten or more times. The third published 25 articles over 30 years, with only the first ten cited ten or more times (all within the first five years) and the other 15 never cited. All three have an h-index of 10, but these are very different career arcs (see the sketch after this list).

3. Measures typically go up and down in response to changes; think of a thermometer. The h-index, however, only goes up: it records a high-water mark. If a researcher's productivity and impact decrease over time, that decline is not reflected in the h-index, as the examples above show. And because it takes time to research, publish, and garner citations, the h-index favors those further along in their careers.

4. Unlike the percentiles or ratios described earlier, the h-index is not normalized by field, so it cannot be compared across different disciplines, and sometimes not even across subfields within a single discipline.
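
To illustrate point 2, the short sketch below (again an illustration, with made-up citation counts rather than real data) computes the h-index for the three hypothetical careers described above. All three come out to 10 despite very different publication records.

    def h_index(citations):
        """Largest h such that h papers have at least h citations each.

        Same computation as the earlier sketch, compacted into one line.
        """
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

    # Made-up per-paper citation counts for the three careers in point 2
    researcher_a = [15, 14, 13, 12, 12, 11, 11, 10, 10, 10]  # 10 papers, all cited 10+ times
    researcher_b = [12] * 10 + [2] * 90                      # 100 papers, only 10 cited 10+ times
    researcher_c = [20] * 10 + [0] * 15                      # 25 papers, 15 never cited

    for name, record in [("A", researcher_a), ("B", researcher_b), ("C", researcher_c)]:
        print(name, h_index(record))  # each prints 10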
 

Further Reading and Viewing

Understanding the h-index [5-minute video, Washington University, Suiter & Sarli, 7/6/2016]

Why the h-index is a bogus measure of academic impact [The Conversation, Gingras & Khelfaoui, 7/10/2020]

What's Wrong with the h-index, according to its inventor [Nature Index, Conroy, 3/2/2020]

Halt the h-Index [Leiden Madtrics, Rijcke, Waltman, & Van Eck, 5/19/2021]

The Inconsistency of the h-Index [arXiv, Waltman & Van Eck, 8/19/2011]