Professionalism/Salami Slicing and the Least Publishable Unit

Salami slicing, or salami publication, is the practice of dividing academic research into smaller units so that each paper presents a single idea, hypothesis, or finding. These smaller papers are called least publishable units (LPUs) because they contain the smallest amount of research that can feasibly be published. The practice is controversial in the academic community and currently lies in an ethical grey area. Salami slicing differs from redundant publication, in which substantial sections of a paper overlap with an already published work.[1]

Background

The least publishable unit is not a new phenomenon. In a 1981 article, William J. Broad was among the first to discuss the idea of the LPU.[2] According to Broad, an associate professor at Harvard in 1958 had 18 papers on his curriculum vitae, but just 23 years later, a candidate for the same position and "facing a similar climb" was likely to have 50 to 100 papers on their CV.[2] In an extreme example of salami slicing, 33 separate papers were published from the results of a single mental health study in Iran.[3] Each of Iran's 31 provinces had its own paper, and two summary reports (the kind typically published from such studies) were also published.[3] In one of the summary reports, the author of the study cited each of the other papers from the study.[3] The total number of published papers is thus inflated. Students and other researchers must sift through the fluff to find substantive information.[2] Journal reviewers face the difficult task of digging beneath bibliography appearances amid "growing hoards of journals."[2]

With the goal of becoming a professor and earning tenure, academics must perform research and write corresponding papers in order to earn promotions. In theory, a greater number of papers suggests a harder-working or more deserving candidate. In practice, evaluators face time constraints when reviewing candidates' work, which makes it hard to assess the quality of both a publication and the journal in which it appears. Instead, they often resort to a simple count of publications. Thus there exists a "publish or perish" mentality in academia: the need to rapidly publish papers in order to further one's career. This mentality fosters a culture of salami publication and the LPU.

Regardless of its impact on research and academia, the practice of salami publication does have support. It is argued to be justified when the slices in question address different hypotheses.[4] Some topics are broad enough to require publication in different contexts.[4] The same holds in cases where a single data set is used in multiple papers; salami slicing is justified here so long as previous publications are properly referenced.[5] The life of an academic is not a luxurious one, and having a paper published can be a much-needed confidence boost.[6] Salami slicing can also be seen as a means to an end in fulfilling tenure requirements, especially for academics between larger research projects or whose emphasis is on helping students learn.[6]

Issues

Salami slicing and LPUs, while generally accepted, have plenty of issues that fuel debate around their use. Separating data over multiple papers makes it difficult to make comparisons across the data or see the big picture. Spielmans and Biehn studied salami slicing in clinical trials of the drug duloxetine and found that many of the papers focused on specific demographics or issues with the drug; many had similar results and could have been combined into one larger paper. Separating these ideas between papers not only makes it hard to see the overall picture of the drug, it can also create the impression that more research has been performed on the drug than actually has. In fact, their research showed that many of the papers had authors employed by Eli Lilly, the producer of duloxetine, suggesting that the appearance of increased research on the drug may be desirable to these authors in order to accelerate its approval and adoption.[7]

Publishing papers is a long and expensive process: the average cost of peer review at the British Medical Journal has been estimated at £100 per review, and the cost of a paper that makes it through the whole system at £1,000. Open access journals, which often operate by charging authors fees instead of subscribers, generally charge the author between $500 and $2,500 in peer review and publishing fees.[8] Review times are also substantial, at around 17 weeks for the average accepted paper across all scientific fields.[9] An excess of LPUs and salami-sliced papers may strain the journal system, consuming resources that could be better allocated to larger pieces of research.

Salami slicing also strains researchers themselves, as more focus must be placed on publishing than on the research itself, in true "publish or perish" spirit. The Nobel laureate Peter Higgs, whose work has been essential in advancing particle physics, has said, "Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough", showing that even world-class scientists feel encumbered by today's publishing culture.[10] With ever-increasing publication counts, it can be difficult to stand out in a crowd of researchers, which creates problems at the administrative level. As Arnold Relman, former editor of the New England Journal of Medicine, put it: "You have to know the institutions, the people, the meetings... It's a ticklish matter".[2]

Impact

Salami slicing and LPUs cause unrest in academia. Many researchers work in a "publish or perish" environment: the perceived need to publish quantity over quality in order to earn tenure or find success in their fields.[11] This culture encourages LPUs, which in turn reinforce the culture; the two social forces drive a self-reinforcing loop. The result is information diluted across articles, leading to empty papers that are tiring to read. Because peer review takes so much time and so many resources, diluted material can slip under the peer-review radar and be published more easily. Fewer and fewer papers, therefore, pass the "so-what" threshold.

When many papers cite the same data and cross-reference each other, they create a network of apparent legitimacy. The number of times a study has been cited is a commonly used measure of its success. When low-quality data passes through relaxed peer review and researchers publish as much as they can, studies that should have little impact are frequently cited and develop a false sense of legitimacy. This over-representation of results also violates statistical independence, creating biases in papers that analyze those results.[12]

Data show that in many fields, producing highly cited (impactful) papers correlates well with producing many papers.[13] Possible explanations include the experience authors gain through repeated publication and the increased chance that at least one paper becomes highly cited. High output also increases the occurrence of breakthroughs and important inventions, as would be expected from a theoretical perspective on scientific creativity.[14]

Solutions

The Committee on Publication Ethics (COPE) provides guidance to editors and publications on ethical practices through cases and flowcharts covering scenarios they may encounter.[15] COPE does not condemn salami slicing outright but clearly defines it as papers that cover the same population, methods, and question. This definition helps differentiate salami slicing from redundant publication and from splitting up papers by outcome, both of which COPE considers illegitimate. According to COPE, decisions on salami-sliced papers should be left up to the publisher; however, by better educating editors and publishers, COPE advocates for higher-quality publications, which may mean fewer salami-sliced ones.[1]

There have been many proposed solutions to salami slicing and LPUs, though none has been widely put into effect. One idea, on the institutional side, is to limit the number of publications a researcher can submit per year in the activity report used to evaluate their work.[16] This would remove the incentive to artificially inflate publication counts, allow researchers to highlight their most important work, and reduce the workload on administrators.

Another proposal would require a researcher to peer-review three other researchers' papers for every paper they publish. This may disincentivize researchers from publishing large numbers of LPUs, since doing so would only increase their workload. It could also have the secondary effect of improving the peer-review system by making reviewers more readily available and less overworked, which could help decrease the number of LPUs, salami-sliced papers, and other low-quality papers that make it through the review process.[16]

Salami-sliced papers can be beneficial in that they present information in short, digestible chunks. At the same time, they can make learning more difficult for students; Raymond Owens, of the California Institute of Technology, said, "students confronted with a half-dozen short papers have a hard time seeing the forest for the trees".[2] If journals were to better organize salami-sliced papers, creating direct links between them in databases and making their relationships clear, the 'forest' would become more visible, reducing many of the issues associated with salami slicing.

Related Issues

Co-authorship can also become extreme. In order to pad curricula vitae, authors publish papers with irrationally long author lists. A 2015 article on the Higgs boson listed 5,154 authors (21 of whom were deceased), the list alone comprising 24 of the article's 33 pages.[17] In some cases of excessive co-authorship, authors' contributions are as trivial as conversations had in elevators.[2] Such hyperauthorship makes affixing responsibility to contributors beyond the first and last authors nearly impossible; authorship is effectively rendered meaningless.[18]

Similarly, the salami slicing of publications has expanded into slicing the research itself. The least interesting unit, or LIU, is a topic that is "just interesting enough" to warrant research.[19] Like salami publication, the concept of the LIU places greater emphasis on the number of papers academics must publish, a notion that only reinforces the "publish or perish" mentality plaguing academia.[19] The LIU raises ethical questions over careerism and whether research should be viewed as a means to advance one's career or to improve the field itself. Facing the overwhelming pressure to publish, academics pursue the LIU while leaving grand ideas behind.[19] Out of fear that a grand idea may be wrong, researchers choose the safe option, to the detriment of science itself. Salami slicing of research also hurts research financially: resources are allocated to "not interesting" topics that, due to their insignificance, are unlikely to be accepted by top journals.[19]

References

  1. a b COPE. (2005). Salami publication. Retrieved April 25, 2019, from https://publicationethics.org/case/salami-publication
  2. a b c d e f g Broad, W. J. (March 13, 1981). The publishing game: Getting more for less. Science, 211(4487), 1137-1139.
  3. a b c Neuroskeptic (March 3, 2018). Scientific salami slicing: 33 Papers from 1 study. Retrieved from http://blogs.discovermagazine.com/neuroskeptic/2018/03/03/salami-slicing-32-papers/#.XMehpuhKiUm
  4. a b Menon, V. & Muraleedharan, A. (2016). Salami slicing of data sets: What the young researcher needs to know. Indian Journal of Psychological Medicine, 38(6), 577-578. doi:10.4103/0253-7176.194906
  5. Tolsgaard, M. G., Ellaway, R., Woods, N., & Norman, G. (February 12, 2019). Salami-slicing and plagiarism: How should we respond? Advances in Health Sciences Education, 24(3). doi:10.1007/s10459-019-09876-7
  6. a b Owen, W. J. (February 9, 2004). In defense of the least publishable unit. The Chronicle of Higher Education. https://www.chronicle.com/article/In-Defense-of-the-Least/44761
  7. Spielmans, G. I., Biehn, T. L., & Sawrey, D. L. (2009). A Case Study of Salami Slicing: Pooled Analyses of Duloxetine for Depression. https://doi.org/10.1159/000270917
  8. Smith R. (2006). Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine, 99(4), 178–182. doi:10.1258/jrsm.99.4.178
  9. Huisman, J., & Smits, J. (2017). Duration and quality of the peer review process: the author’s perspective. Scientometrics, 113(1), 633–650. https://doi.org/10.1007/s11192-017-2310-5
  10. Aitkenhead, D. (2013). Peter Higgs: I wouldn’t be productive enough for today’s academic system. Retrieved April 23, 2019, from https://www.theguardian.com/science/2013/dec/06/peter-higgs-boson-academic-system
  11. Dupps, William J, and J Bradley Randleman. “The Perils of the Least Publishable Unit.” Journal of Refractive Surgery, vol. 28, no. 9, 2012, pp. 601–602.
  12. Budd, John M., and Kristine N. Stewart. “Is There Such a Thing as ‘Least Publishable Unit’? An Empirical Investigation.” Libres, vol. 25, no. 2, 2015, pp. 78–85.
  13. Sandström, Ulf, and Peter Van Den Besselaar. “Quantity and/or Quality? The Importance of Publishing Many Papers.” PLoS ONE, vol. 11, no. 11, 2016, doi:10.1371/journal.pone.0166149.
  14. Uzzi B, Mukherjee S, Stringer M, Jones B. Atypical Combinations and Scientific Impact. Science 2013: 468. doi: 10.1126/science.1240474 PMID 24159044
  15. COPE. (n.d.). About COPE | Committee on Publication Ethics: COPE. Retrieved April 25, 2019, from https://publicationethics.org/about/our-organisation
  16. a b Siegel, D., & Baveye, P. (2010). Battling the Paper Glut. Science, 329(5998), 1466. https://doi.org/10.1126/science.329.5998.1466-a
  17. Aad, G. et al. (ATLAS Collaboration, CMS Collaboration) Phys. Rev. Lett. 114, 191803 (2015). doi: 10.1103/PhysRevLett.114.191803
  18. Cronin, B. (2001). Hyperauthorship: A postmodern perversion or evidence of a structural shift in scholarly communication practices? Journal of the American Society for Information Science and Technology, 52(7), 558-569. doi:10.1002/asi.1097.abs
  19. a b c d Cabbolet, M. J. T. F. (2016). The least interesting unit: A new concept for enhancing one's academic career opportunities. Science and Engineering Ethics, 22, 1837-1841. doi:10.1007/s11948-015-9736-z