Is peer review running out of steam?

Topics for my editorials tend to pop up as I am getting close to the deadline to publish the next issue of EJHP.

The topic of peer review and its associated issues has exercised me again, as I will explain below. I first wrote an editorial on the topic in 2014, titled ‘In peer review we trust—or do we?’1

There are plenty of definitions of the term ‘peer review’. One such is as follows: ‘Peer review is a fundamental part of scientific publishing whereby individuals with relevant expertise who were not involved in a particular piece of research critically appraise others’ work to ensure that it meets standards for ethical conduct, quality, and rigour.’2

There is a substantial literature on the topic of (editorial) peer review as well as regular appeals from journal editors seeking to recruit more reviewers.3 4

Taking an evidence perspective, a few systematic reviews have attempted either to assess the effects of peer review or to evaluate interventions aimed at improving the quality of review. A systematic review by Jefferson et al5 included 19 studies, of which 12 were randomised controlled trials (RCTs). There was some suggestion that concealing reviewer or author identities had a positive effect on reviews and that a statistical checklist can be helpful, but no evidence that training referees improved performance. Two studies showed that editorial processes improved readability and the quality of reporting. The authors concluded that: ‘Editorial peer review…is largely untested and its effect uncertain.’

A later review by Bruce et al6 in 2016 identified 22 reports of RCTs, most published before 2004. The authors found that neither training nor the use of a checklist improved the quality of the final manuscript. That said, both statistical review and open peer review did improve the quality of the final report.

While the weight of opinion is that peer review is a necessary and valuable stage in assessing potential articles for publication, that opinion does not rest on a robust evidence base. I remain convinced, for now, that peer review improves the reporting of science but will never identify every problem or error. In addition, we ask peer reviewers to focus on the science, applicability and clarity of a paper, not to copy edit it or to identify plagiarism (we check for the latter automatically for all accepted papers). The process should help authors see their work through the eyes of another expert, and it should be constructive even when the final decision is to reject a paper.

The editorial team are extremely grateful to the many who give their time to help improve the quality of publications in EJHP.

So what has been exercising me? Peer review is all well and good, but what if we cannot find reviewers? This is now a common problem, and I am having to reject papers based not on scientific merit but on an inability to find reviewers for a particular submission. We have to draw the line somewhere: once some 10–15 invitations to review have been declined, there is no option but to reject. In some cases the article in question may have limited appeal to a hospital pharmacy audience. In others, I suspect we may be missing out on a good paper and alienating authors in the process.

However, I do think we have created a problem with the common article submission platforms, where a single click is all it takes to decline a request to act as a peer reviewer. So if you are invited, please think a little longer before hitting the decline button.

There is a developing debate7 on post-publication peer review; while this shortens the time to publication, it may well lead to the publication of suspect data or poorly designed studies. I am not sure we are ready for that just yet.
