Open peer review: a better way?

Using GitHub and transparent open review to improve the peer review process.

Academia

Author: William Becker

Published: September 14, 2022

If you have worked in academia for any length of time at all, you will know a bit about peer review, and will most likely have had some frustrating experiences along the way.

But for non-academics, what is peer review anyway? Well, first, it's necessary to explain that much of an academic's worth is measured by the number (and to some extent, the quality) of their academic publications. A publication is basically a report of some research done by the academic, usually of around 6,000-10,000 words. The "manuscript" (yes, for some reason we have to pretend we're in Ancient Egypt) is submitted to an appropriate journal, it goes through "peer review", and then, if it is accepted, it is published, and the academic and his/her colleagues get a juicy publication point and accompanying citations to beef up their reputation.

More or less, the process of publication looks like this:

  1. Researcher, usually with some colleagues, does some research.
  2. They write a manuscript describing the research and the findings.
  3. They find a suitable journal to submit it to. There are loads of journals that cover every type of research imaginable, so they pick the most suitable one that fits the topic and the level of the research.
  4. They submit the paper to the journal.
  5. The journal finds some reviewers, who are typically other academics working in the same field. They spend some time reading the research, and each recommends either that the work be rejected outright, or that it could be published after modifications which they suggest.
  6. If the paper is not rejected by the reviewers, the authors have to address the reviewers’ comments, or else give some good reasons why they are not making the proposed changes. The comments can be anything from small typos to fundamental questions about the work and suggestions for very time-consuming further research or changes in methodology.
  7. Steps 5 and 6 can repeat several times.
  8. If the reviewers and the editor are finally happy with the paper, it is recommended for publication and eventually published.

This is a simplified version of the process, and the review part typically takes some months, but can take years. Importantly, the large majority of reviews are done confidentially, so only the editor, the reviewers and the authors see what happened. Often the reviews are blind (the reviewers see the names of the authors but not the other way around) or double-blind (neither authors nor reviewers know each other's names; everything is anonymised). I'll return to this point.

Problems

The central idea behind peer review is sensible: research should be checked and questioned by experts. Only when the peer community is satisfied that the research has been performed with a sufficient degree of rigour can it be published. Published research should therefore, in theory, always meet a minimum acceptable standard of quality.

In practice there are many problems with this process. Far too many to go into much detail here: I could mention the conflicting incentives of the “publish or perish” metrics that researchers are judged on; predatory/fake journals that will publish anything for a fee; “citation circles” where groups of friendly researchers cite each other’s work simply to increase their citation counts; and the issue of “who you know” often outweighing “what you know”, among many other issues that come to mind. But let’s focus on some specific problems with the review process.

Let me preface this with a disclaimer: peer review is an essential part of the scientific process. Reviewers give up their time to make this happen, without payment, and should be recognised for this. I have reviewed many papers for around 20 different journals; many academics do far more than that, and do a great job of it. However, not all reviewers play by the rules, and this is a problem. So it is that minority I will focus on here.

Hostile reviewers

Every academic has come up against hostile reviewers. The review process is not meant to be easy, but a surprising number of reviewers nit-pick to an extent that goes beyond what is reasonable. They may insist on extensive but unnecessary modifications, or be constantly unsatisfied with revisions. In an ideal world the editor would spot these over-zealous reviewers, but editors are busy. Sometimes such reviews arise because the reviewer is from a competing institution, so there is little reason (apart from professionalism and honesty) for them to play fair. This problem is enabled by the fact that the review process happens behind closed doors, so there are virtually no consequences if reviewers don't behave themselves.

Blind and double-blind reviews should, in theory, guard against vindictive attacks between rivals, but in practice many research fields are small enough that it is often fairly easy, even without names, to guess who wrote the paper (if you are the reviewer) and who the reviewers are (if you are the author).

Friendly reviewers

At the other end of the spectrum are the reviewers who are your mates. To save editor time (and ostensibly to find suitable reviewers), in many journals authors are allowed to suggest reviewers for their paper. In theory, reviewers are not supposed to have any close connection with the authors, but in practice, editors have to deal with many papers and reviewers are scarce. So often, the authors’ recommended reviewers end up reviewing the paper.

Clearly the problem is that if the authors are not playing cricket, so to speak, they can just recommend their friends as reviewers. Depending on their level of integrity, these reviewers may give an overly soft review or even accept the paper outright with no modifications. The result is that the paper hasn't really been through a real peer review. Again, this problem is facilitated by the fact that the review is not public.

Publication for citations

A fairly innocent-sounding review comment is a recommendation that the author add citations to some specific papers, a.k.a. "relevant literature", possibly to "improve the context of the work". The catch is that all of these citations happen to be the reviewer's own papers, or else those of their friends. This puts the authors in an awkward position: they can cite the proposed papers, which is easy to do and will appease the reviewer, bringing them closer to publication. Alternatively, they could contest the recommended citations, but this will likely delay the process and could aggravate the reviewer. By far the easiest option is to just cite the papers - is it worth risking possibly months of delays? But it is irksome to have to "pay passage" to the reviewer in this way, and it is obviously not ethical.

Lazy reviews

A proper review of a paper takes time, sometimes rather a long time. Apart from anything else, many research papers are complex and take time to understand, even for experts. You should carefully read the whole paper, try to spot any flaws in the methodology or mistakes in the equations, and make sure it is written clearly and concisely. You should make as many suggestions as possible to improve the paper (without being nit-picky - see above), and be prepared to spend time communicating with the authors over successive revisions. Sometimes you also have to check the cited work to make sure the citations actually support the statements made in the paper.

Since reviews are unpaid and uncredited, it is easy for them to fall low on the priority list of the reviewers’ tasks, especially in busy periods. This can sometimes lead to very lazy reviews, where a reviewer writes a short review to simply tick it off the list. In one case, a reviewer of my paper simply copied the abstract of the paper (presenting it as his/her review) and added one vague sentence saying that it wasn’t good enough. It was clear that the reviewer had not even read the paper and had probably spent five minutes compiling the “review”.

The issue is, of course, that it is far easier to reject a paper on vague grounds than to read it properly, understand it, and offer constructive comments.

The never-ending story

Reviewing takes time, as we have seen. But there should be reasonable limits. To describe a personal example, the research for one paper of mine was completed in 2014. We had a hard time finding a suitable home for the work because it was a fusion of two fields (sensitivity analysis and econometrics). At one point it was stuck in one journal for a whole year with no response from the editor. It was rejected from a few other journals because of the topic and other reasons. Finally we submitted it to another journal, where it took two and a half years to be reviewed. In total, it took us seven years to get the paper published.

I’m not claiming that the paper had a divine right to publication, and in some cases reviewers had made reasonable points for rejecting the paper, which we subsequently addressed. But if research takes years to review, that is a problem for the authors and for the wider research field because there is such a huge lag in making new work visible. It means that published literature doesn’t represent the state of the art, but rather the state of the art a couple of years ago, or more.

A better way? ☀️

Ok, that's enough complaining for now. I could definitely write more, but instead I wanted to share a recent positive submission and review experience, which I think ought to be more widespread.

Over the last couple of years I have built and maintained the COINr package, which is an R package for building and analysing composite indicators. Since this package is now used quite widely, and I’m not immune to wanting a little credit for my work, I wanted to have a way that users can cite it if they choose.
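
Incidentally, citing an R package is easy from the user's side: R's built-in citation() function prints a reference for any installed package, and will pick up a published paper if the package declares one in a CITATION file. A minimal sketch, assuming COINr is installed from CRAN:

```r
# Install from CRAN if needed:
# install.packages("COINr")

# Print the citation info that users can copy into their reference list
# (the exact output depends on the installed version)
citation("COINr")
```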

The problem is that I wanted to avoid having to write a long paper, when the package is already well-documented at its website and in the many vignettes that accompany it.

I was very pleased, then, to discover the Journal of Open Source Software (JOSS). JOSS describes itself as "a developer friendly, open access journal for research software packages". It was created to give open-source software developers a way to write short papers that allow users to cite their work. The logic is that research software is often well-documented and enables others to perform research, so writing a full-length paper about the software is often unnecessary, and is done mainly to satisfy journal requirements. All of this is described very well in a blog post by the founder of JOSS.

Apart from the fact that JOSS is great for software developers, it has (in my opinion) a great review process. If you read back through my list of complaints in the previous section, it is evident (to me) that all of them are facilitated, or aggravated, by the fact that the review process happens behind closed doors. Reviewers, and authors, can misbehave with very few consequences unless the editor is unusually vigilant.

JOSS instead employs a fully transparent review process via GitHub. You do need to host your software on GitHub, and have an account there. Then, the review process looks like this:

  1. You write a short paper describing your software package in Markdown. This makes formatting very easy (a skeleton paper is sketched just after this list). Here's my paper.
  2. You commit and push your paper to a folder in your GitHub repo. JOSS then has a nifty "GitHub action" which compiles the paper into a pdf (remotely, on GitHub) and takes care of all the formatting for you (a minimal workflow file is also sketched after this list). Here you can find my compiled pdf.
  3. An editor is assigned to your paper. They open an issue on the JOSS repo which is the main reference for your paper’s review. You can see the COINr review here.
  4. A lot of initial checks are automated using the "editorial bot", which checks things like word count and references; this alone saves a chunk of editor/reviewer time.
  5. Reviewers then provide their reviews by opening issues on the repo that hosts the software under review. This means that the authors are instantly aware of the points raised by the reviewers, and can check them off and refer to commits that address the specific issues raised. See e.g. this small issue which was flagged by a reviewer and then resolved with a commit.
  6. When the review process is concluded, if the reviewers and editor are happy, the paper is published online as a pdf and receives a DOI which allows the software to be cited.
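
To give a flavour of step 1, here is a skeleton of what a JOSS paper looks like in Markdown. The metadata fields follow the JOSS author guidelines as I understand them, but all the values below are placeholders - check the current submission docs for the exact requirements:

```markdown
---
title: 'MyPackage: an R package for doing something useful'
tags:
  - R
  - composite indicators
authors:
  - name: Jane Researcher    # placeholder author
    affiliation: 1
affiliations:
  - name: Some Institution   # placeholder affiliation
    index: 1
date: 14 September 2022
bibliography: paper.bib
---

# Summary

A short, accessible description of what the software does.

# Statement of need

Why the software exists, who it is for, and how it relates to
existing work, with citations like [@smith2020].

# References
```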
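The "GitHub action" in step 2 is just an ordinary workflow file in the repo. The sketch below is adapted from my memory of the JOSS docs - the action name (openjournals/openjournals-draft-action) and its journal/paper-path inputs should be checked against the current documentation before use:

```yaml
# .github/workflows/draft-pdf.yml
on: [push]

jobs:
  paper:
    runs-on: ubuntu-latest
    name: Paper draft
    steps:
      - name: Checkout the repository so the action can see the paper
        uses: actions/checkout@v3
      - name: Compile the Markdown paper into a formatted draft PDF
        uses: openjournals/openjournals-draft-action@master
        with:
          journal: joss
          paper-path: paper.md   # or e.g. paper/paper.md if it lives in a folder
      - name: Upload the compiled PDF as a downloadable artifact
        uses: actions/upload-artifact@v3
        with:
          name: paper
          path: paper.pdf
```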

There are at least two things that are great about this process, in my opinion. First of all, everything is public. This means that much of the skullduggery that can occur in the peer review process doesn't happen, because everything is visible to anyone. Both authors and reviewers are accountable and incentivised to behave ethically. Reviewers can even showcase their reviews as part of their portfolio to demonstrate their competence.

The second thing is the use of GitHub. Every issue registered by the reviewers is tracked, and modifications to the paper and to the software can be referenced by specific commits. This keeps a perfect record of how each problem was resolved, or not. Comments between authors and reviewers are received in real time, and the authors can address each issue one at a time, rather than having to compile a single giant response. Using GitHub also does away with the huge and clumsy editorial manager systems used by many other journals, and keeps costs low.

So

I actually stopped writing research papers a couple of years ago because I moved out of an academic position, and at the same time I became a bit disillusioned with academia in general (that’s a story for another day!). However, I did want one last paper to pick up some citations for the COINr package. I approached this with some trepidation, all too familiar with the often-painful experience of trying to get a paper published.

I have found JOSS to be a really refreshing experience. It's worth saying that my paper is still under review and, although I think it should make it, I of course cannot guarantee it will be published. But what is important is the process - the transparency and efficiency have, for me, removed a lot of the problems I have experienced with peer review. Perhaps the most important thing is that transparency encourages fairness, and I think that is what most of us are looking for in peer review.

I would personally love to see the JOSS model used more widely. It is of course not the only journal to use open peer review, but (at least in the fields I have worked in) most journals still use blind or double-blind models, which tend to come with a lot of the problems mentioned. It is certainly the first journal I have submitted to that uses GitHub.

But how generally applicable is the JOSS process? While the use of Git and GitHub might seem intimidating to some, in reality they are just tools. Most people have struggled with Word at one point or another, and LaTeX, while brilliant, is often clunky and confusing. Writing in Markdown is actually easier in many respects because you don't have to worry about formatting. The basics of Git can be learned in a day, and are hugely useful for collaboration and tracking changes. Moreover, everything is free and open-source, which makes it far more inclusive, particularly for authors in less-wealthy countries. So in practice, the use of GitHub, or similar, needn't be an obstacle, and it comes with a number of attractive benefits.
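
To illustrate the point, here is roughly all the markup a paper section needs in Markdown (standard pandoc-style syntax; the citation key is a placeholder):

```markdown
# Summary

Headings are hashes, *emphasis* and **bold** are asterisks, and
citations are keys from a .bib file, like [@smith2020].

- Lists are just dashes
- No preamble, packages or style files needed
```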

That's it for now. I must go and work through some of the issues on my paper!