Anyone who has submitted an article to AHSE in the past decade has encountered four key questions as part of the submission process:

  1. Has an earlier version of this manuscript, or a similar manuscript, been rejected by Advances in Health Sciences Education?

  2. Is another version of this manuscript or a similar manuscript published, in press, under review or submitted to another journal?

  3. Has another version of this manuscript or another manuscript based on the same or similar questions/subjects/data/analyses/concepts been rejected by another journal?

  4. Have any other articles based on the same or similar questions/subjects/data/analyses/concepts been published or accepted for publication elsewhere?

These questions were an attempt, perhaps an awkward one, to get at three related “bad behaviours” that cause editors endless hours of lost sleep. These are:

  • Plagiarism

  • Auto-Plagiarism

  • Salami-Slicing

We’ll expand on these terms momentarily.

The third question is somewhat different, however; it relates to an entirely separate issue: a “fast track” review process that we plan to implement at AHSE. This is the “good news” in the title, and we will return to it in due course. For the moment, let us look at the “bad news” part: issues of plagiarism, “auto-plagiarism” and “salami-slicing”.

First, some definitions and descriptions.

What about the three evils? These transgressions are not hypothetical. We have been very vigilant since August in screening submissions for evidence of all three, and have uncovered 23 submissions with various problems. Twelve showed evidence of plagiarism; the remainder involved auto-plagiarism or salami-slicing. I hate to think how many got past us in the previous 21.5 years.

What do we mean by these terms?

Plagiarism appears straightforward. As defined by the OED, it is “The action or practice of taking someone else’s work, idea, etc., and passing it off as one’s own; literary theft.” In academe, this is a very serious offence and can lead to dismissal.

Detecting plagiarism also seems easy: so easy that one could get a computer to do it. Most academics have heard of Turnitin.com, which is used to screen undergraduate submissions. Editorial Manager uses a similar program, iThenticate, to screen all manuscripts.

But it’s not quite as simple as it appears. iThenticate, even though it is clever enough not to count text in quotes and comprehensive enough to screen non-conventional sources such as websites, never comes back with zero overlap. And the amount of overlap matters. The guidelines of the Committee on Publication Ethics (Wager 2016) distinguish two levels:

  • Clear plagiarism (unattributed use of large portions of text and/or data, presented as if they were by the plagiarist)

  • Minor copying of short phrases only (e.g. in discussion of research paper from non-native language speaker); no misattribution of data

What is to be done about it? According to COPE, clear plagiarism should invoke the heavy artillery (informing the author’s superiors, and so on), whereas minor plagiarism should be handled by simply informing the author of expected behaviour. The trouble is, as we indicated, that most instances detected by iThenticate fall between “clear” and “minor”, so the poor editor is left to make a very critical judgment call with little guidance.
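To make the idea of an “amount of overlap” concrete, here is a minimal sketch (in Python) of the kind of comparison such screening programs perform. It is emphatically not iThenticate’s algorithm, which is proprietary; the five-word window and the sample texts are invented for illustration. The score is simply the fraction of a submission’s five-word phrases that reappear verbatim in an earlier text.

    import re

    def ngrams(text, n=5):
        # Lowercase the text, keep only words, and return the set of word n-grams.
        words = re.findall(r"[a-z']+", text.lower())
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_score(submission, source, n=5):
        # Fraction of the submission's n-grams found verbatim in the source.
        sub, src = ngrams(submission, n), ngrams(source, n)
        return len(sub & src) / len(sub) if sub else 0.0

    # Invented example: a prior paper and a follow-up from the same program.
    prior = "We assessed diagnostic reasoning in a cohort of clerkship students."
    new = "We assessed diagnostic reasoning in a cohort of first-year residents."
    print(f"Overlap: {overlap_score(new, prior):.0%}")  # prints "Overlap: 57%"

Even this toy score makes the editor’s dilemma plain: two reports from the same program of research will never score zero, and no fixed threshold cleanly separates “clear” from “minor”; a human judgment call remains.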

Auto-plagiarism, redundancy and “salami-slicing” are three related concepts. All involve authors reusing their own prior work: submitting a second paper that follows up on one they published elsewhere, telling a similar story and using similar methods. Redundancy is the term used by COPE, and again it has levels:

  • Major overlap/redundancy (i.e. based on same data with identical or very similar findings and/or evidence that authors have sought to hide redundancy, e.g. by changing title or author order or not citing previous papers)

  • Minor overlap with some element of redundancy or legitimate overlap (e.g. methods) or re-analysis (e.g. sub-group/extended follow-up/discussion aimed at different audience)

But this conflates two issues that are distinguished in the terminology I have used: auto-plagiarism and salami-slicing. (It also conflates them with intent to deceive; stay tuned.) Auto-plagiarism amounts to repeating segments of text from a previously published paper of the author’s own, and again its detection can be automated. While major overlap (essentially republishing an entire manuscript in multiple journals) does occur (Sackett and Rosenberg 1995; Rosenberg and Sackett 1995), such episodes are rare. Conversely, if the submission reports a study from a program of research, it is almost unavoidable that phrases or sentences from a previous manuscript will creep in. In an ideal world, authors would obsessively check for this and rewrite the offending text, but when it does arise, it is difficult to believe it deserves any more than a very gentle wrist tap. On the other hand, copying large sections, whole sentences or paragraphs, from previous papers amounts to copyright infringement and merits more serious consequences.

Salami-slicing is much trickier to detect. It amounts to submitting two papers based on the same data, methods, subjects, etc. But what constitutes overlap? And how much is tolerable? There is no computer program to screen for it, and a claim of salami-slicing must necessarily involve a judgment call.

Fortunately, Eva (2017) recently wrote a paper defining and describing what he means by the term, a description I wholeheartedly endorse. He says:

When I say ‘salami slicing’, I mean the act of dividing a single story (usually derived from a single study) into multiple papers.

The use of the term “story” bears elaboration. Eva goes on to state:

This definition prioritises the division of a ‘story’ rather than defining salami slicing through the division of a ‘research project’ because what constitutes a project is so variable in our diverse field that one set of data can tell multiple stories and an interpretable story may require multiple studies.

But there is one easy litmus test, which brings us back to questions 2 and 4 at the beginning of this editorial, and to the issue of intent. These questions ask you to disclose whether similar works have been published elsewhere and to describe how this one differs. You should also describe and reference the previous work in the text of the article and indicate the unique contribution of the present work. Indeed, if the submission is part of a research program, this is simply a natural way to set the stage. And it serves a second purpose: clarifying the ground rules for both author and editor. Of course, an author may decide not to disclose prior related publications, but such omissions are easily discovered with PubMed or Google Scholar, at the author’s peril.

As Eva (2017) states:

…the best advice anyone can offer is to be transparent. Alert editors to articles, published or otherwise, that might be construed as a slice of salami so that you make your intentions clear and enable a genuine discussion to take place.

This does not mean that differences will not arise. As I have indicated, no amount of definition tweaking and no increase in sophistication of computer screening will remove the role of judgment in the process. And sometimes judgments will differ.

The good news

Now to the good news. As Eva indicates in his article, one reason salami-slicing is viewed as unacceptable is that, among other things, it wastes reviewers’ time. One could argue that the whole peer review process consumes a huge amount of undocumented resources in the time of reviewers, editors and others. One wonders whether academic publishing would be economically viable if publishers were required to pay reviewers.

Moreover, it is a wasteful process. It’s not easy to get published in medical education. AHSE’s acceptance rate is about 13%; other journals are no better, and many are even tougher. A critical point is the reason for rejection. At AHSE, every article that is judged acceptable is accepted; I believe some other journals have quotas on how many they can accept. But it matters not: all articles go through a formal peer review process (unlike the many atrocities committed by some Open Access journals), and at that point it’s a tough race to win.

All of this suggests that the common practice of submitting to a different journal when you’ve been turned down is completely rational. Your paper may not be quite good enough, or may be a poor fit, or may require additional analysis, or may need a better literature review, or, or… I’ve done it myself. I even had to do it once when I turned down my own submission to AHSE after a particularly rough ride from the reviewers (it was accepted elsewhere).

Having said that, the common notion that every article eventually finds a home is simply wrong. A few years ago, Kevin Eva and I studied the fate of articles rejected by Medical Education and AHSE, following them on PubMed, Google and elsewhere for five years. Two-thirds never saw the light of day. So after a point, cut your losses.

But if you are going to submit elsewhere, please, please, PLEASE take the reviewers’ comments seriously. You may not agree with them, but they deserve careful consideration regardless. In particular, there are many occasions when I, as editor, have uncovered a study with a fatal flaw, such that the conclusions are not justified. I like to hope that the author takes my comment seriously and buries the manuscript, writing it off as a learning experience. But I fear that presupposes an extreme degree of rationality and detachment; more likely, many manuscripts are simply shipped off to another journal in the hope that the next set of reviewers won’t spot the error.

But do we really need to start over again with a new set of reviewers? They are likely drawn from the same subpopulation of folks with some knowledge in the area. In fact, many is the time when the reviewers for the second submission are the same as those for the first.

Now put yourself in my shoes. Suppose you answer question 3 in the affirmative and inform AHSE that the paper was submitted elsewhere and rejected (as you are required to do). As editor, I am left with two options:

I could start with a tabula rasa, a clean slate, and send the paper out to new reviewers. It’s a lottery, right? And everyone deserves to buy some new tickets. But that doubles the investment of that precious commodity: reviewer time.

Or I could ask you to disclose the previous reviews and how you dealt with them. You don’t have to, but here’s why it’s worth considering.

Why? Because I have a lot of faith in the review process. It’s NOT a lottery. Most reviewers are really good most of the time. On the other hand, my job as editor is quality control, and I have to be able to recognize when a review has missed the boat, either (a) by not noticing a fatal flaw in the study, or (b) by getting something wrong and claiming an error was committed when it wasn’t (usually of the form “You did your statistical test, but you should have done my statistical test”). In any case, I am satisfied with, in fact proud of, the quality of our reviews (well, not quite all of them, but most).

So I am loath to write off the reviews from a previous journal. If the paper is meritorious, the author will be able to identify the problems and remedy them. If the reviewers were off base, the author can explain why she thinks they’re wrong. And if the paper is not salvageable, hopefully the author will recognize it and not try to find it a second home.

In any case, we have decided to offer authors the option of going the second route. Authors are encouraged to submit previous reviews and tell us how they dealt with them: you can say how you fixed the problem, or, if you chose not to, why you think the reviewer was off base. In return, we very likely won’t send the paper out for review, and you’ll get a decision much more quickly. And we all win: you get quicker turnaround, and we don’t hassle a new set of reviewers.

In order to clarify our intent in these submission questions, we have reworded and simplified them as shown in Table 1.

Table 1 Revised questions

I’ve been in this job for 22 years. One thing that amazes me is that surprises and issues constantly arise that require adjustments and new policies. I would have thought we would reach a steady state after about five years, but that is far from the case. The current epidemic has caused me to acquire a whole new set of detective skills, and along the way I’ve learned a great deal as editor that, I think, ultimately makes for a better journal. I hope you’ve profited from this as well.