Against Peer Review

Peer review is often, and incorrectly, treated as a gold standard by which the legitimacy of scientific studies is measured, but peer review itself provides minimal value toward establishing the veracity of a study’s findings.

The essence of science is systematically and empirically testing hypotheses about observable, repeatable, natural phenomena.

The legitimacy of any particular scientific finding rests on whether applying the methods used in the study yields similar findings upon repetition. If a study can be replicated, its findings are verified.

Peer review does not replicate studies, so it does not speak to the veracity of scientific findings; it does not affect the legitimacy of a study’s findings.

Peer review does have its uses, as a review of methodology and a post-study review of the analysis of the findings, but not as a verification of the study itself. Peer review can be useful for ascertaining whether a study’s methodology actually measures what it purports to measure and whether the analysis of the findings is legitimate, but the value of both of these assumes the findings are verified.

Notice, then, that peer review in practice occurs at improper times for both of these. For a methodological review, methodology should be fully set in place before the experiment is carried out, as any change in methodology will affect the findings and outputs. Methodological peer review should therefore occur before an experiment begins. Analysis should occur after an experiment is verified, as analyzing false findings just leads to false analyses. Because of this, analytical peer review should only occur after findings have already been verified.

Yet peer review as practiced occurs after an experiment has been carried out but before an experiment is verified, the worst of both worlds. It is useless for critiquing methodology, short of rejecting the study completely, as post-experiment revisions to methodology are anti-scientific. It is also useless for critiquing analyses, because it is unknown whether the findings being analyzed are of any actual value. It accomplishes nothing besides providing a false sense of legitimacy.

On top of this, peer review acts as a filter for what is novel, important, and, most importantly, relevant. This filter is anti-scientific. A study finding nothing useful may not be as practically relevant as a study finding something novel, important, and relevant, but it is just as methodologically relevant. You cannot filter out the studies finding “nothing”, filter in only the studies finding “something”, and expect to have an accurate view of the world.

For illustration, if 10 studies are done on the same aspect of Topic X, and 9 find nothing novel, important, or relevant about that aspect of X, and 1 study finds something very novel, important, and relevant, that 1 study may pass review and be published, while the other 9 studies will not, providing a very biased view of Topic X.
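To make that arithmetic concrete, here is a minimal, hypothetical sketch in Python of what such a filter does. The parameters (a true effect of zero, a two-sigma “interesting enough to publish” cutoff, the study count) are made up purely for illustration: simulate many studies of an effect that is actually nothing, publish only the “interesting” ones, and compare the published picture with the true one.

```python
# Hypothetical sketch: publication bias from filtering out null results.
# Assumes a true effect of zero; each "study" measures it with sampling noise,
# and only studies whose estimate clears an arbitrary threshold get published.
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.0          # the real effect size is nothing at all
NOISE_SD = 1.0             # per-study sampling noise
PUBLISH_THRESHOLD = 1.96   # roughly a two-sigma "novel, important, relevant" cutoff
N_STUDIES = 1000

all_estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STUDIES)]
published = [e for e in all_estimates if abs(e) > PUBLISH_THRESHOLD]

print(f"Mean effect across all {N_STUDIES} studies: {statistics.mean(all_estimates):+.3f}")
print(f"Studies passing the 'something' filter:    {len(published)}")
print(f"Mean |effect| among published studies:      {statistics.mean(abs(e) for e in published):.3f}")
```

Running this, the full set of studies averages out to roughly zero, while the small published subset shows an apparently large effect, which is exactly the biased view of Topic X described above.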

Just as bad, this incentivizes scientists to do studies that are novel, important, and relevant, instead of studies that aren’t. Publish or perish. This creates some very obvious perverse incentives and pre-filtering effects on scientific studies.

Peer review is likely, in practice, a net negative for science, as it creates a false sense of legitimacy in the absence of replication and verification. By being treated as a gold standard, peer review gives a study legitimacy it has not earned. Implying a study meets the standard of being scientific, when none of its findings have actually been scientifically verified, is absurd, especially when peer review is the norm but replication isn’t.

The entire process of scientific peer review as it currently stands should be torn down and replaced by mandatory pre-experiment methodological review and mandatory post-experiment replication. Post-replication analytical review would be good, but not necessary, for proper science. Any study that completes pre-experiment methodological review should be published, at least in summary, even if it provides nothing novel, important, or relevant, as a lack of results is just as methodologically important as “real” results.

Of course, this would be more time-consuming and expensive, but that is the price of doing real science, rather than pretending to do science.

Post-script: Peer review for the humanities is fine as it is, since these fields are unscientific in the first place. In an unscientific field, paper, or study, a post-study review by experts is probably useful.

5 comments

  1. Good article. Don’t forget though how political bias can lead to great abuse of the peer review system. Even novel and new things will be rejected if they find, for example, some bad thing about blacks or some good thing about white males. Or obviously nonsensical and poorly supported ideas like stereotype threat get through easily.

    http://atavisionary.com/stereotype-threat-and-and-pseudo-scientists/

    By now I am sure most academics wouldn’t even bother trying to study a question which likely will give undesired results in this area because they know it will get rejected at best or destroy their reputation at worst.

A recent study found that 80% of top universities have zero Republicans. Keeping in mind that “Republicans” are just leftists from the past in a lot of cases, but are still farther to the right than the liberals in these places, it really shows how little trust should be extended to the institution.

    http://archive.is/6lt3W

  2. As a current scientist in a hard field (fluid dynamics): peer review is the most horrible thing we’ve ever done to science. It’s anti-science, in fact: it actively suppresses the advancement of human knowledge.

    Einstein, Schroedinger, Bohr, Dirac, Lavoisier, Pasteur, Gauss, Euler – just a few of the names of famous scientists whose work would never have passed peer review.

    An active anti-scientific eugenics and genocide program would be less injurious to modern science than the peer review paradigm. I cannot condemn it in terms that are strong enough.

Peer review is one of those things like Social Security. Most of the population doesn’t know how it works; they think it works an entirely different way from the way it actually works, and the way they think it works would be greatly superior to the actual real thing.
On Social Security, if you ask an average person how it works, he’ll tell you that he pays in every paycheck and the government invests that in an account with his name on it for his retirement. If you press him with “invests how?”, he might tell you Treasury bills or maybe an S&P 500 index fund. He’s wrong, but his idea is far better and less unsustainable.
    On peer review, he’ll tell you, well, the peer reviewers hook up his experiment and see if they get the same results/same numbers. This too is far from the truth—the biggest thing in peer review in my experience (been there, done that on both sides in engineering) is to make sure that all of the relevant research is cited, because, after all, profs get ‘graded’ based on how often their papers are cited by other profs. There’s no way to cure the problem with the current incentive structure.
