Red flags in submissions to Optometry and Vision Science

After only a few months as Editor in Chief of Optometry and Vision Science (OVS), it may seem a little early to write an editorial on how to ensure that your paper gets a full review in OVS! However, those few months have seen over 300 submissions, and the papers being submitted are not dissimilar to those submitted to Ophthalmic and Physiological Optics in the early years when I was Editor in Chief there. Many contain all too familiar errors, and currently our acceptance rate is down to around 15%. Papers are triaged by me and the Editorial Board, and through that system about 25% get through to a full review. Triage decisions are typically made within 3 to 6 days of submission, so at least the authors receive a quick decision and can revise their paper and submit elsewhere if they wish. The triage process also limits the work requested from reviewers, who would otherwise be asked to review these weak submissions. This editorial is written for inexperienced researchers and PhD students and describes the “red flags” that suggest a weak paper and possible early rejection, plus a couple of “green lights” (that direct you along the path to publication) at the end to finish on a positive note.

RED FLAG 1: A COVER LETTER STATING SUBMISSION TO A DIFFERENT JOURNAL

We currently receive some papers that have clearly been rejected by other journals. The most obvious indication is a cover letter stating that the paper is being submitted to Ophthalmology, Vision Research, or Investigative Ophthalmology and Visual Science, among others. The first piece of advice, therefore, is to make sure that you put the correct name of the journal you are submitting to on the cover letter! We will shortly add a section to the submission process that asks whether the paper has previously been rejected by another journal and what changes, if any, the authors have made before submitting to OVS. Authors should not continue to submit a paper that keeps getting rejected by journal after journal: either something is wrong with the study or they are repeatedly submitting to the wrong journals.

RED FLAG 2: A PAPER IN AN INCORRECT FORMAT

Many authors do not seem to read the journal's author guidelines. As you might imagine, an editor is less likely to look favorably on a paper that follows none of the requested formatting; for OVS, that would mean submitting a paper with an unstructured abstract and with references cited using, for example, the Harvard system. This suggests that the paper has been rejected from another journal that uses a different format and recycled without any thought or effort.

RED FLAG 3: A LARGE ITHENTICATE PLAGIARISM SCORE

All papers submitted to OVS are run through the Crossref Similarity Check plagiarism software (iThenticate; Oakland, CA), and any paper with a large overlap (20%+) is inspected to see whether the overlap reflects publication on a preprint website, text from a PhD thesis, a few similar sentences scattered about the paper, or large chunks copied word for word from other people's work. For the latter submissions, OVS follows the guidance of the Committee on Publication Ethics (https://publicationethics.org/) and the recommendations of the International Committee of Medical Journal Editors (http://www.icmje.org). Some journals are particular about self-plagiarism, arguing that it flouts copyright. This seems needlessly restrictive: if an author has found the ideal way to describe a methodology succinctly, it seems unnecessary to require them to rewrite it.

RED FLAG 4: HIGHLY SURGICAL PAPERS

OVS is the research journal of the American Academy of Optometry. As such, we do not typically publish highly technical papers about ocular surgery and niche ophthalmology practice. A journal's ranking within the Ophthalmology category of citation metric tables such as the Journal Impact Factor and Google Scholar does not provide a sufficient indication of the type of papers it publishes. Most journals have an “About the Journal” section on their website. Ours currently states the following:

“Optometry and Vision Science is the most authoritative source for current developments in optometry, physiological optics, and vision science. This frequently cited monthly scientific journal has served primary eye care practitioners for more than 75 years, promoting vital interdisciplinary exchange among optometrists and vision scientists worldwide.” (https://journals.lww.com/optvissci/Pages/aboutthejournal.aspx)

RED FLAG 5: POOR ENGLISH MAKING THE REPORT UNINTELLIGIBLE

Researchers whose first language is not English are at a disadvantage, given that 98% of science papers are written in English.1 If your first language is not English, you may need to ask colleagues to proofread your paper to improve the grammar and/or use a language editing service of the kind many publishers provide. In a recent article surveying researchers in Colombia, 94% had asked for favors to edit their English, and 59% had paid for editing.1 Standard editing costs about a quarter of a PhD student's salary, and premium editing costs a half.1 Editors and reviewers will often endeavor to help with grammar when papers get to full review, but papers need to be readable first. It may be unfair to researchers whose first language is not English, but understandably, a huge red flag is a paper whose rationale and methodology cannot be understood because of poor English.

RED FLAG 6: POORLY WRITTEN TITLES AND ABSTRACTS

Given the ever-increasing number of submissions that all journals now receive, editors will not always read the whole paper and may read just the title, abstract, and methods section for triage purposes. It is therefore unfortunate that the title and abstract are often more poorly written than the main manuscript. Sometimes they seem to have been written last with insufficient thought; at other times they seem to have been written early and no longer reflect the rest of the paper. Some titles miss the point of the study entirely or are written for the expert in the area with no thought for the general reader, and a few are crammed with acronyms. The title and abstract are the parts editors read most when deciding whether to send a paper for full review, and, if the paper is published, they are the most read parts in databases such as PubMed and Google Scholar, so write them carefully.

RED FLAG 7: LACK OF DETAILED METHODS

The rationale behind a triage system concentrating on the methods section is that the introduction and discussion can be revised with input from editors and reviewers, but the study design has been set and the data have been collected using the methods described, so these cannot be changed. Indeed, the words of the statistician RA Fisher highlight the issue perfectly:

“To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of.”2

I can count on one hand the number of times I have asked authors to reduce the amount of detail in their methods section, but a huge number have been asked to provide more. This includes full details of procedures that clinicians might think need no description, such as refraction and visual acuity, which are very often poorly described.3,4

RED FLAG 8: NO HYPOTHESIS OR RESEARCH QUESTION

A red flag in a manuscript's introduction is the lack of a hypothesis or research question. Many poor papers include neither, and a not uncommon study design is to collect data from a particular sample of patients with a particular instrument (or instruments) and simply report what was found. There should be more of a rationale for performing a study than “we had access to this group of patients and we had this new instrument.”

RED FLAG 9: INTRODUCTIONS THAT ARE FAR TOO LONG

Introductions that read like the literature survey from an MSc or PhD student's thesis often are just that. Although the student may feel that all that detail is necessary, it rarely is, and students need to learn the skill of making an introduction succinct.

RED FLAG 10: USING SNELLEN CHARTS IN CLINICAL RESEARCH

Optometry researchers Ian Bailey and Jan Lovie-Kitchin introduced the logMAR chart design in 1976, nearly 50 years ago, in a forerunner of OVS, the American Journal of Optometry and Physiological Optics.5 LogMAR charts have been shown to be at least twice as repeatable as, and three times more sensitive to interocular differences than, Snellen charts and have become the gold standard for clinical research.4–6 They are also widely available nowadays, including on computer-based systems, and clinical research studies should not be using Snellen charts4–6 other than perhaps to screen participants for inclusion in a study. A practice that is particularly annoying is the use of Snellen charts to measure visual acuity followed by presentation of the data in logMAR, which can make the reader think that the data are more reliable than they really are.4
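The arithmetic behind such conversions is trivial, which is perhaps why the practice is so tempting; converting the notation does nothing to improve the underlying measurement. As a minimal sketch (the Snellen fractions below are invented examples), logMAR is simply the base-10 logarithm of the minimum angle of resolution, that is, of the Snellen denominator divided by the numerator:

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """Convert a Snellen fraction (e.g., 6/12 or 20/40) to logMAR.

    The minimum angle of resolution (MAR, in minutes of arc) is the
    denominator divided by the numerator; logMAR is log10(MAR).
    """
    return math.log10(denominator / numerator)

print(snellen_to_logmar(6, 6))    # 0.0   (6/6, i.e., 20/20)
print(snellen_to_logmar(6, 12))   # ~0.30 (6/12, i.e., 20/40)
print(snellen_to_logmar(20, 40))  # ~0.30 (same acuity in 20-ft notation)
```

The converted value still inherits the coarse step sizes and variable letters per line of the Snellen chart, so it carries Snellen-level reliability whatever the notation.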

RED FLAG 11: BEST-CORRECTED VISUAL ACUITY

The use of the descriptor best-corrected visual acuity (VA) is unfortunately becoming increasingly common.4 It was originally used by ophthalmology researchers to describe visual acuity with the subjective refraction after refractive surgery but has become a catch-all, with many authors using the term without any further description. It is now used to describe a plethora of different measurements, including habitual (or presenting) VA with spectacles, VA with an autorefractor result, or VA after an unspecified but often so-called “standard” subjective refraction.4 When investigated, the standard refraction can be as little as an overrefraction with the patient's own spectacles using +0.50 and −0.50 DS.4 Details are needed: Are the VA data monocular or binocular? Is the VA measured with the patient's own glasses, an autorefractor result, or a subjective refraction? What was included in the subjective refraction? Was astigmatism assessed? What were the chart type, luminance, and working distance? What scoring rule was used? What termination rule was used? How many clinicians were involved, and what was their training?4

RED FLAG 12: USING DATA FROM BOTH EYES

Many papers are rejected because of poor statistical analyses, and the most common error is analyzing data from both eyes of participants without correcting for their dependency, effectively doubling the sample size.7
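Fellow eyes of the same participant are correlated, so treating them as independent observations inflates statistical significance. One standard remedy, among the options discussed in the statistical guidelines cited above,7 is a mixed-effects model with a random intercept per participant. A minimal sketch in Python using the statsmodels library, with invented data and variable names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per eye, two rows per participant.
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4],
    "eye": ["OD", "OS"] * 4,
    "group": ["control"] * 4 + ["case"] * 4,
    "logmar": [0.02, 0.04, -0.06, 0.00, 0.18, 0.22, 0.12, 0.16],
})

# The random intercept per subject models the correlation between fellow
# eyes, so the two eyes are not treated as independent observations.
model = smf.mixedlm("logmar ~ group", df, groups=df["subject"])
print(model.fit().summary())
```

Simpler alternatives are to analyze only one eye per participant (e.g., chosen at random) or to average the two eyes, either of which removes the dependency entirely.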

RED FLAG 13: SPURIOUS PRECISION OF DATA

Data are also often poorly reported.8 Spurious precision adds no value and can detract from readability and credibility: mean values should be reported to no more than one decimal place beyond the original data. For example, mean visual acuities should not be reported as 0.0567217 logMAR but as 0.06 or 0.057 logMAR. Similarly, do not report percentages to several decimal places when the sample size is less than 100, and provide the actual numbers: do not report 50.588%, but describe the data as 51% (43/85).8
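Sensible rounding is easy to build into an analysis script rather than leaving it to hand editing. A minimal sketch, reusing the counts and mean from the example above:

```python
n_correct, n_total = 43, 85

# Report percentages as whole numbers when n < 100, alongside the raw counts.
pct = 100 * n_correct / n_total               # 50.588235...
print(f"{pct:.0f}% ({n_correct}/{n_total})")  # -> 51% (43/85)

# Report means to no more than one decimal place beyond the original data.
mean_va = 0.0567217
print(f"{mean_va:.2f} logMAR")                # -> 0.06 logMAR
```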

RED FLAG 14: MAJOR FOCUS ON A P VALUE JUST <0.05

Reports that focus hugely on the p value and wax lyrical when p=0.048 (and thus p<0.05) can be a red flag. p=0.048 is very similar to p=0.052 and does not necessarily need a very different interpretation. Consider the sample size (if it is large, small p values are expected and can be meaningless),9 consider the effect size,10 and ask whether the result is clinically as well as statistically significant.
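Reporting an effect size such as Cohen's d alongside the p value costs only a few lines of code. A minimal sketch for two independent samples (the acuity data are invented):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled SD."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

control = [0.02, -0.04, 0.06, 0.00, 0.08]
cases = [0.14, 0.10, 0.20, 0.12, 0.18]
print(f"d = {cohens_d(cases, control):.2f}")
```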

RED FLAG 15: NO STUDY LIMITATIONS SECTION

Inexperienced authors can think that admitting that their study has limitations will mean that it will not get published.11 This is incorrect. All studies are limited in some way or another, and acknowledging the limitations of your study is very helpful to the reader. Limitations sections add strength to a paper; they are not a weakness.11

RED FLAG 16: OVERPLAYING THE IMPORTANCE OF THE RESULTS

Some inexperienced researchers think that a paper must show an important finding or it will not get published, so they often overplay the importance of their results. Although we would all like to think that our studies and reports make a big difference to clinicians and their patients, steps forward tend to be incremental. There is no need to overplay your results, and doing so is just a red flag.

RED FLAG 17: POOR REFERENCES (TOO MANY NON-PEER-REVIEWED ARTICLES, TEXTBOOKS, WEBSITES, SELF-CITATIONS, AND OLD PAPERS)

Reference peer-reviewed papers from quality journals wherever possible, and avoid non-peer-reviewed references such as professional magazines (e.g., Review of Optometry, Optometry Today), textbooks, conference presentations (especially old ones: Why haven't the data been published in a peer-reviewed journal?), and websites. Do not let the majority of the references be self-citations (why is nobody else publishing in the area?), and include several recently published papers. If virtually all the references are from over 10 years ago, it suggests a dead field or that you have not reviewed the literature properly.

RED FLAG 18: INCONSISTENT REFERENCE FORMATS

Reference format should follow the journal guidelines, and the same format must be used throughout: be pedantic about this, because many editors and reviewers are pedantic about it, and sloppiness in referencing can suggest sloppiness in the research. Note that EndNote and other reference managers do not always format things as you want them. Indeed, in my experience, they rarely do, so proofread and manually edit the references.

RED FLAG 19: LACK OF PROOFREADING

Sentences that make no sense, silly misspellings, and poor grammar from English-speaking authors do not inspire confidence and can suggest that the research was as sloppily done as the report writing. Proofreading is a must: do not leave it to somebody else.

RED FLAG 20: INAPPROPRIATE SUGGESTED REVIEWERS

The submission process asks if you would like to suggest reviewers. Editors always appreciate some help in choosing suitable reviewers, so providing one or two suggestions is recommended. However, make sure that they are not recent (or current) collaborators, colleagues from your own department, or clinicians the editor will never have heard of who have little or no research output when checked on PubMed or Google Scholar.

GREEN LIGHTS

“Green lights” are positive aspects of reports that will lead to your papers being fully reviewed and later published. A green light can be the opposite of one of the red flags described above: for example, a methods section in which the experimental design, participants, and procedures are described in sufficient detail that others could duplicate the research, or a limitations section in the discussion. In addition to avoiding red flags, here are a couple of other green lights that will further enhance your submissions.

GREEN LIGHT 1: HIGHLIGHT YOUR STUDY'S NOVELTY OR IMPORTANCE

If there is anything novel, important, or unique about your study, make sure that you let the editors and reviewers (and subsequently the readers) know about it (although be careful of Red Flag 16!).

GREEN LIGHT 2: SUPPLEMENTARY INFORMATION, VIDEOS, AND DATA REPOSITORIES

Increasingly, supplementary information is being provided online and can be extremely useful: questionnaires, surveys, and video clips of experiments, among others. Journals and readers very much appreciate these additional pieces of information.

REFERENCES

1. Ramírez-Castañeda V. Disadvantages in preparing and publishing scientific papers caused by the dominance of the English language in science: The case of Colombian researchers in biological sciences. PLoS One 2020;15:e0238372.
2. Bunce C. Avoiding experimental death: The EQUATOR network (a valuable resource for research). Ophthalmic Physiol Opt 2017;37:367–9.
3. Elliott DB. What is the appropriate gold standard test for refractive error? Ophthalmic Physiol Opt 2017;37:115–7.
4. Elliott DB. The good (logMAR), the bad (Snellen) and the ugly (BCVA, number of letters read) of visual acuity measurement. Ophthalmic Physiol Opt 2016;36:355–8.
5. Bailey IL, Lovie-Kitchin JE. Visual acuity testing. From the laboratory to the clinic. Vision Res 2013;90:2–9.
6. Lovie-Kitchin JE. Is it time to confine Snellen charts to the annals of history? Ophthalmic Physiol Opt 2015;35:631–6.
7. Armstrong RA. Statistical guidelines for the analysis of data obtained from one or both eyes. Ophthalmic Physiol Opt 2013;33:7–14.
8. Altman DG, Gore SM, Gardner MJ, et al. Statistical guidelines for contributors to medical journals. Br Med J (Clin Res Ed) 1983;286:1489–93.
9. Armstrong RA. Is there a large sample size problem? Ophthalmic Physiol Opt 2019;39:129–30.
10. Sullivan GM, Feinn R. Using effect size—or why the P value is not enough. J Grad Med Educ 2012;4:279–82.
11. Price JH, Murnan J. Research limitations and the necessity of reporting them. Am J Health Educ 2004;35:66–7.
