I came to know h only when h was already well-known to many others. And then, one day, an epiphanic moment: the sudden realization that h had intruded on the academic careers of all of us without even asking for consent. Since then, h and I have maintained a rather ambivalent relationship. On the one hand, I should probably have liked h, because my private h is nothing to be ashamed of in my scientific discipline. On the other, I think that h undermines proper scientific culture.
The Hirsch index, Hirsch number, H index/number, h-index, or simply h to connoisseurs, was formally born on November 15, 2005, in a paper entitled "An index to quantify an individual's scientific research output", single-authored by J.E. Hirsch of the Department of Physics at UCSD. In one of the tersest and most effective abstracts I have ever encountered in the scientific literature, Hirsch states: "I propose the index h, defined as the number of papers with citation number ≥h, as a useful index to characterize the scientific output of a researcher." The rationale is hence straightforward: find a single measure that encapsulates the scientific output of a scientist, to be used in evaluations for recruitment, promotion, and competitive grant allocation, or simply for experiencing naches. The methodology is also amazingly simple: scroll down the list of publications, ranked from most cited to least cited, until the rank of a paper exceeds the number of citations for that paper; h is the last rank at which the citation count still equals or exceeds the rank. Of course, you do not really have to do it that way: nowadays you can simply select "create citation report" on ISI Web of Knowledge (Thomson Reuters) or a similar database, or, for those reluctant to spend even this amount of energy, use one of the "h index calculators" on the web (but beware, some yield bizarre results).
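The scroll-down procedure Hirsch describes can be sketched in a few lines of code. This is a minimal illustration, not any official calculator; the citation counts in the example are invented:

```python
def h_index(citations):
    """Compute the Hirsch h-index: the largest h such that the author
    has h papers each cited at least h times."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # this paper's citations still match its rank
        else:
            break      # rank now exceeds citations: stop scrolling
    return h

# Five hypothetical papers: four of them have at least 4 citations,
# but only three have at least 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Note that, as discussed below, the function can only return larger values as new citations accumulate; nothing in the definition ever makes h shrink.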
The ingenious simplicity of h - an entire career squeezed into a single, usually double-digit, number - immediately gained it much popularity. A recent Google search returned no fewer than 641,000,000 results for "h index", and although I admit I didn't scan them all, a quick glance revealed that many of them do indeed refer to the Hirsch index. The methodological shortcomings of h became apparent immediately, but did not slow down the infection. Clearly, there is no doubt that Hirsch meant only good. But when an auteur releases a piece of art into the universe, that creation acquires a life of its own (though in this case I suspect that Hirsch foresaw something, dubbing the "quantification" of science "potentially distasteful" already in the first paragraph of his paper).
Among the issues brought up from the outset: the dependence of h on the culture of the specific discipline (h values are higher, for example, in molecular biology than in math or psychology); the effect of the size of the sub-discipline and of research teams; the bias against books, which are cited sparsely in research papers; the difference between archival papers and groundbreaking papers that ultimately make it into textbooks; and the context of citation, including whether the citing paper refutes the work cited. Theoretically, one could even make an h-index-promoted career by publishing irreproducible results in a catchy field. At the time of writing, h can only grow over time, though on second thought the idea of procedures that reduce h over time, for poetic justice, is not entirely irrational. Pitfalls and potential remedies in using h are outlined nicely by various authors, including in Wikipedia.
But in my view, the major drawback is not methodological but conceptual. h boosts instant scientific culture. Using it relieves many of the need to look more closely into (or, God forbid, even read) the papers of those they attempt to evaluate. It is an epitome of industrialized science. Clearly, a high h (hh, in the spirit of instant science and the universe of texting) is not sufficient evidence for high-quality science, nor is the lack of hh evidence for the lack of strong impact. Some of the most influential scientists I know, including Nobel laureates, have a rather meager h within their own discipline. I think that h provides an incentive to avoid devoting the attention needed to really evaluate lasting contributions to science, and we always risk the temptation to take refuge in easy solutions. Although the reductionist approach has been highly successful in promoting modern science, the reduction of careers to a single numerical index goes much too far.
Using h is also, in my view, an additional incentive to publish too many fragments of papers instead of coherent narratives. The late Max Delbruck, a founding father of modern molecular genetics and biology and a Nobel laureate in physiology or medicine in 1969 (with Alfred Hershey), advocated the idea that PhDs should receive coupons upon graduation, each good for publishing one paper. No unused coupons, no more papers. There was an argument over what the number of coupons should be, but it was never deemed beneficial to set it higher than 30. This would, by definition, have limited h to a small, useless index, forcing authors to publish no more papers than they themselves could seriously read, and no more than others could read in turn. But it is probably too late to revive the Delbruck principle.
By the way, Hirsch's paper (PNAS 102: 16569, 2005) has so far been cited 903 times, making it by far his most cited publication, and it has contributed to his own h index, which stands at 48, pretty high for a physicist.
© Yadin Dudai 2011