Will reputation metrics open scientific publication?

That is the contention of Richard Price, the founder of Academia.edu.

Aaron Swartz was determined to free up access to academic articles. He perceived an injustice: scientific research lies behind expensive paywalls despite being funded by the taxpayer. The taxpayer ends up paying twice for the same research: once to fund it and a second time to read it.

The heart of the problem lies in the reputation system, which encourages scientists to put their work behind paywalls. The way out of this mess is to build new reputation metrics.

As usual, there are lots of issues here: the rights of taxpayers, the market power of journals, and so on. But Price actually focusses on the incentives of scientists. Given that scientists now have options to disseminate their research freely, why do they submit it exclusively to journals that then put up paywalls?

One answer is that this is the set of incentives handed down by Universities and other scientists when evaluating a scientist's performance. And it is true: in performance measurement, time and time again, where an article is published matters, and so it is hard for a scientist to consider publishing elsewhere. Moreover, this applies even to scientists with a high reputation, because publishing good work in a journal bolsters the journal and, with it, the prominence of their past publications. It is a self-reinforcing cycle that makes change difficult. In that regard, it is a gift to incumbent journals, which do not just charge users to cover the costs of publishing the next issue but can charge based on users' willingness to pay for the back-catalogue of all previous issues. Even if scientists stopped publishing in these journals tomorrow, that latter paywall would remain for a long time.

Nonetheless, Price's contention is that if scientists stop backing paywalled journals, the ball of change will start rolling. His solution is to provide an alternative way of measuring performance: citation metrics. Broadly speaking, these evaluate the quality of an article after the fact rather than before the fact, as peer-reviewed journals do. And citation metrics are valuable. Create them and they become integrated into scientist performance evaluation. Because that evaluation is so hard, and in particular it is hard when comparing scientists, quantitative measures are attractive. So citation metrics have emerged alongside journals to complement them, but they can substitute for them too. Have a highly cited article in a lower-tier journal and that can help.
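To make the idea concrete, here is a minimal sketch of one widely used citation metric, the h-index: the largest h such that an author has h papers each cited at least h times. The function name and the sample citation counts are hypothetical, but the definition is the standard one (Hirsch, 2005).

```python
def h_index(citation_counts):
    """Return the h-index: the largest h such that the author has
    h papers with at least h citations each."""
    # Sort citation counts from most-cited to least-cited paper.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        # The paper at position `rank` must have at least `rank`
        # citations for the h-index to reach `rank`.
        if citations >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical author with six papers and these citation counts:
print(h_index([42, 17, 9, 4, 3, 1]))  # -> 4
```

Note that nothing in the computation depends on where the papers appeared, which is exactly what makes metrics like this, in principle, independent of journals.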

But citation metrics also reinforce journal power. Which journals perform best on citation metrics? The top-tier journals, of course. The very journals that citation metrics are supposedly going to disrupt use the metrics to argue that scientists should publish with them. Now, this is a classic correlation-versus-causation problem, but it is entirely possible that the causation runs in the direction the journals would like: publishing in a better journal makes it more likely you will gain more citations. If so, citation metrics will bolster top-tier journals rather than drive scientists away from them, or from journals at all.

The reason to believe this causal theory is attention. Journals have always arisen from a combination of information overload and competition for attention. Information overload arises because there is far more scientific knowledge out there than scientists can possibly consume. Thus, in working out what to pay attention to, signals of quality are important. Publish a working paper and you are competing in the throng. Publish in a journal and readers know that some evaluation has occurred. To be sure, if a working paper is highly cited or widely discussed, that is a great signal (and a working paper that is easier to digest and read than a journal article even more so). But you are one step behind a paper that has already passed a journal's peer review. It is true that peer review and quality certification can be separated from the traditional journal, as the mathematicians have recently begun to do, but as we economists know, it takes time to establish a reputation. Right now, I suspect citation metrics are helping incumbent journal reputations as much as the alternatives, and they will continue to do so as long as even a fraction of good scientists remain mildly biased towards publishing in journals.

One thing that would open up paywalls for users is authors (or their funders or institutions) paying for publication. As Michael Eisen has eloquently argued, Universities have a big role to play here. However, this too will only be a driver if open publications perform better on citation metrics than their closed peers. There is a trickle of evidence that this is the case, but not yet enough to be convincing (and matters get complicated when IP protection is also involved). Academia.edu and other citation-metrics start-ups should focus on how they might provide that evidence if they want to press the case as Price has done. After all, even theories of the behaviour of scientists need evidence-based verification.
