More than half a century ago, information scientist Derek de Solla Price observed that the scientific literature was growing exponentially, doubling every 10–15 years [1]. Little did he know that this observation would spawn not just a field of study, but an entire industry devoted to counting, measuring, weighing and indexing every aspect of academic output [2]. Today, the modern academic doesn’t just publish — they generate metrics. They don’t just research — they optimize their h-indices. And they certainly don’t just think up new ideas — they maximize citation counts while maintaining a favourable paper output.
Ultimately, they produce ‘units of assessment’, according to the UK Research Excellence Framework (REF), which the United Kingdom uses to allocate £2 billion (US$2.7 billion) in public research funding across its universities. They don’t collaborate; they strategically co-author, to maximize citation networks. And they certainly don’t read any more — who has time when there are metrics to massage?
The rise of the h-index: when counting became culture
It began innocently enough. Twenty years ago, physicist Jorge Hirsch at the University of California, San Diego, proposed the h-index. It was elegantly simple: a scholar with an h-index of n has published n papers that have each been cited at least n times [3]. Hirsch’s original paper suggested that “for physics, a value for h of about 10–12 might be a useful guideline for tenure decisions at major research universities”.
Oh, sweet summer child, to quote a popular line from Game of Thrones.
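Hirsch’s definition is, at least, easy to compute. Below is a minimal, illustrative sketch (mine, not Hirsch’s; the function name is my own): sort a researcher’s citation counts in descending order and take the largest rank at which the count still meets or exceeds the rank.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have been cited at least h times each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank   # the paper at this rank still clears the bar
        else:
            break      # every later paper has even fewer citations
    return h

# Five papers cited 10, 8, 5, 3 and 1 times give an h-index of 3:
# three papers have at least three citations each, but no four with four.
print(h_index([10, 8, 5, 3, 1]))  # -> 3
```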
What Hirsch had unleashed was not merely a metric, but a compulsion. Within a decade, the h-index had metastasized across disciplines. And, very quickly, the metric became a target rather than a measure. Search committees began sorting candidates by h-index; promotion committees set h-index thresholds; graduate students started checking their h-indices with the frequency and anxiety typically reserved for checking a dating app.
An explosion of metrics
But why stop at one number when you could have dozens? The h-index begat the g-index (which gives more weight to highly cited articles) [4], which begat the e-index (to account for citations beyond those considered in the h-index), the a-index (to measure the average number of citations in a researcher’s top papers) and the m-index (which takes career length into account). There is now the i10-index (the number of publications with at least ten citations), the h-core (the core set of papers considered by the h-index) and the contemporary h-index (which adds a decay function, because apparently citations should have a half-life).
Each new metric arrived with its own justification, its own formula and its own promise to finally capture the true essence of scholarly impact.
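To give a flavour of those formulas, here is a hedged sketch of three of the variants named above, using their commonly quoted definitions (function names are my own): the g-index as the largest g for which the top g papers collect at least g² citations in total, the i10-index as a simple threshold count, and the m-index as the h-index divided by career length in years.

```python
def g_index(citations: list[int]) -> int:
    """Largest g such that the g most-cited papers total at least g**2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, count in enumerate(ranked, start=1):
        total += count
        if total >= rank * rank:
            g = rank
    return g

def i10_index(citations: list[int]) -> int:
    """Number of publications with at least ten citations."""
    return sum(1 for c in citations if c >= 10)

def m_index(h: int, career_years: float) -> float:
    """The h-index divided by career length in years (Hirsch's m-quotient)."""
    return h / career_years

papers = [48, 30, 12, 10, 6, 2]
# This toy record has an h-index of 5 (computed as in the earlier sketch).
print(g_index(papers), i10_index(papers), m_index(5, 10.0))  # -> 6 4 0.5
```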
To me, metrics are like trying to measure the ocean by counting waves, with each new metric promising to count those waves more accurately than the last. The REF, for example, gave us ‘impact case studies’ — documents that describe research impact beyond academia. Australia’s Excellence in Research for Australia (ERA) introduced ‘research quality indicators’ to assess quality across disciplines. The Leiden Manifesto [5] actually had to remind us that “quantitative evaluation should support qualitative, expert assessment” — a bit like reminding people that food should be chewed before swallowing. Naturally, these metrics are justified as ensuring accountability to taxpayers, who surely lie awake at night wondering about h-indices rather than, say, whether scientists have cured cancer or explained consciousness.
Meanwhile, the publishers watched and learnt. We now have a panopticon of productivity, in which every citation is counted, every self-citation scrutinized and every collaboration strategically calculated for maximum metric gain.
Enter the j-index: the weight of true scholarship
I humbly propose a new, maybe slightly ironic, metric that captures what truly matters in academic work: physical heft. I call it the j-index, and its calculation is refreshingly straightforward:
j = W ÷ Y
where W is the total weight (in kilograms) of all books a scientist has authored and Y is the number of years since the author earned their doctorate.
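For completeness, and in the same spirit of rigorous accountability, a reference sketch (the book weights and career length below are, of course, hypothetical):

```python
def j_index(book_weights_kg: list[float], years_since_phd: float) -> float:
    """j = W / Y: total weight of authored books, in kilograms per post-PhD year."""
    if years_since_phd <= 0:
        raise ValueError("The j-index is undefined until the doctorate is in hand.")
    return sum(book_weights_kg) / years_since_phd

# Three books totalling 4.2 kg, twelve years after the doctorate:
print(round(j_index([1.8, 1.5, 0.9], 12), 3))  # -> 0.35
```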
Consider the elegant simplicity. No longer must we worry about citation cartels (in which authors cite each other’s publications to boost citation counts) or gaming the system through self-citation. The j-index is immune to such manipulation — unless the books are printed on heavy paper stock, which should be discouraged through strict guidelines on acceptable paper weights. On the plus side, no longer will scholars insist on softcover editions: just think how many trees we can save!
