[HT: Scholarly Kitchen] A new paper published in Science (paywalled, of course) examines publications in the biological sciences from 2006 and 2008 to see how many were accepted on first submission and how many were rejected at least once before acceptance. 75% were accepted the first time around, and most papers otherwise moved through the system quickly. Not surprisingly, rejected papers subsequently went to journals with lower impact factors.

More significantly, the papers that were initially rejected ended up with higher citation rates than their immediately accepted counterparts. Now, this isn't a true controlled experiment, but it is interesting. One theory to explain it is that rejected papers that are accepted the next time around were somehow less conventional than the average paper accepted immediately. (This is something economists might agree with.) Less conventional but publishable papers might themselves be more likely to be impactful. While this hints at some inefficiency in the system overall, for biology the system appears to work as expected.

The other theory is that causation runs the other way and rejection actually improves paper quality. This outcome has a more radical implication: we need to think carefully about the efficiency of the reviewing process. Is it that having another set of referees look at the paper improves its quality, or is it the suggestions and revisions that come out of rejection? If the former, then the tax rejection imposes on another set of referees' attention is part of the system working. If the latter, we would want referee reports to migrate across journals (as the American Economic Association currently does).

Finally, sometimes the reviewing process fails in the other direction and accepts papers that shouldn't be accepted. This appears to have happened with a mathematics paper that was pure nonsense but nicely formatted. Let's hope that was a one-off and not a new way of getting papers published. Of course, it would be even more disturbing if such papers attracted high citations too!

2 Responses to Tracing patterns of academic rejection

  1. Grumble. Isn't it problematic to start from a sample of *accepted* papers for this kind of analysis? Agreed, doing the study prospectively would be much harder. Still, this is a pretty basic design flaw. Am I missing something?

  2. @mdryall says:

    Pierre, seems like we already have a pretty good estimate of how many citations unpublished papers receive.
