This year's intake at the National Academy of Sciences matched the anticipated ranking on the h-index. Credit: M. TEMCHINE

The election procedures of scientific academies are often seen as opaque, clubby and capricious. But Jorge Hirsch, a physicist at the University of California, San Diego, may have found a way to silence those complaints, by inventing a measure of research achievement that, he says, is transparent, unbiased and very hard to rig.

His ‘h-index’ depends on both the number of a scientist's publications and their impact on his or her peers. Hirsch suggests that the method could be used to help determine membership of scientific societies, and to inform funding or tenure decisions.

“It's a very cute idea,” says Sidney Redner, a physicist at Boston University who has studied scientific citation statistics. He welcomes an alternative to simplistic readings of such statistics, which “leave so much room for misinterpretation”.

Redner also agrees that it would be useful to have an objective criterion for election to bodies such as the US National Academy of Sciences (NAS) or Britain's Royal Society. He has been involved in choosing fellows of the American Physical Society (APS), and says that factors other than research quality inevitably come into play. “If you're not a political person, you don't get nominated,” he says. A systematic criterion such as the h-index “would make the playing field more level”.

The h-index is the highest number of papers a scientist has that have each received at least that number of citations. Thus, someone with an h-index of 50 has written 50 papers that have each had at least 50 citations. This, says Hirsch, is fairer than alternative measures based on publication or citation counts. Counting total papers, for example, could reward those with many mediocre publications, whereas counting only a researcher's most highly cited papers may not recognize a large and consistent body of work.
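For readers who want to see the arithmetic, a minimal sketch in Python is given below; the list of citation counts is invented purely for illustration.

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    have been cited at least h times each."""
    # Sort citation counts from highest to lowest.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank      # this paper still counts towards h
        else:
            break         # remaining papers have too few citations
    return h

# Example: five papers with these citation counts give an h-index of 3,
# because three of them have been cited at least three times each.
print(h_index([10, 8, 5, 2, 1]))   # -> 3
```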

And it is hard to inflate one's own h-index, for example by self-citation. “You can't fake it,” says Hirsch, because it relies on how a body of work is received over time. “To manipulate an entire career is very hard,” agrees Redner.

Applying the method to physicists certainly seems to pick out influential individuals (see ‘Some of the highest-ranked physicists, by h-index’). Top, by a considerable margin, is Ed Witten of the Institute for Advanced Study in Princeton, New Jersey, with an h of 110. Witten, who devised the extension of string theory known as M theory, is widely regarded by his peers as the most brilliant living physicist.

Hirsch suggests that after 20 years in research, an h of 20 is a sign of success, and one of 40 indicates “outstanding scientists likely to be found only at the major research laboratories”. An h of about 12 should be good enough to secure university tenure, he says; fellowship of the APS would typically come with an h of 15–20, and membership of the NAS with an h of about 45. In 2005, new NAS members in physics and astronomy had an average h of 44.

Just deserts

Different disciplines have different citation patterns, says Hirsch, so each field would need different thresholds. Biologists can have h values of up to 190. But with that proviso, the method should work across disciplines.

One of the index's main attractions is that it can also rescue from obscurity researchers who have made sustained and significant contributions but who have not won the reputation they deserve. Many solid-state physicists would applaud the contributions of Manuel Cardona (h of 86) at the Max Planck Institute for Solid State Research in Stuttgart, Germany. But few might have ranked him alongside Nobel laureates Philip Anderson (h of 91) and Pierre-Gilles de Gennes (h of 79; see ‘Some of the highest-ranked physicists, by h-index’).

But the research community may take some convincing to put its faith in numbers rather than judgement. Ed Hughes, manager of the UK Research Assessment Exercise, which assesses the quality of university science departments to determine their funding, says that the exercise purposely avoids metrics in favour of expert review panels. “We explicitly don't use impact factors and citation indices,” he says, explaining that 96% of researchers consulted after the 2001 assessment were in favour of using peer review.