
March 07, 2016

The trouble with modern interdisciplinary publishing

You may have heard about the recent debacle at PLOS ONE, in which the journal published [1] and then retracted [2] a paper with the following, seemingly creationist line in the abstract (boldface added):

The explicit functional link indicates that the biomechanical characteristic of tendinous connective architecture between muscles and articulations is the proper design by the Creator to perform a multitude of daily tasks in a comfortable way.

The reference to the Creator also appears in the introduction and the discussion. Jon Wilkins has written good summaries of some of the details, both of the paper's original publication and of its subsequent retraction by the PLOS ONE editorial staff. An interesting point he makes is that the English phrase "the Creator" can plausibly be attributed to poor translation from Chinese; an alternate translation could thus be "nature", although that doesn't ameliorate the creationist subtext also carried by the phrase "proper design." [3]

Anyway, I'm not interested in second-guessing the sequence of editorial and authorial actions that led to the initial publication or the retraction. And the fact that it happened probably won't affect my own inclination for or against publishing something in PLOS ONE, since I think the editors and papers at PLOS ONE are generally okay.

Instead, I think this event serves to illustrate a deeper and quite important issue with modern academic publishing, especially interdisciplinary journals like PLOS ONE. The core question is this: how do we scientists, as a community, ensure quality control in a system where we are drowning in the sheer volume of everything? [4]

For instance, vanity journals like Nature and Science don't let obvious little things like this through, in part because they employ copy editors whose job is to proofread and copyedit articles before they are published. But these journals do let plenty of other, shinier crap through, for which they are properly mocked. How does that crap get through? I think the answer lies partly in the enormous volume of submissions, which necessarily means that individual submissions often are not looked at very closely or, if they are, are often not looked at by the right experts. PLOS ONE has, if anything, an even larger volume problem than the vanity journals: not only did they publish something like 30,000 papers in 2015, but they also have more than 6000 academic editors. The complexity of ensuring a good match between papers and editors with appropriate expertise is dizzying. The fact that this enormous machine works as well as it does is pretty remarkable: PLOS ONE is probably the world's largest journal in volume of submissions, volume of published papers, number of editors, and range of topics covered. It's also a very young journal, having begun publishing only in 2006.

Here's the crux of the matter. Science publishing as we know it was invented when science was run by a pretty small community of fairly like-minded individuals. As science has grown over the past 350 years [5], it has become increasingly specialized, and the specialization trend really kicked in during the 20th century, when governments realized that funding science was politically advantageous (especially for war). Specialization is often held up as a bugaboo in modern discourse, but it's a completely natural process in science, and I think it advanced so much during the 20th century in part because it helped keep the peer review and publication systems small and manageable. That is, specialization is a way to simplify the matching of manuscripts with editors, reviewers, and journals.

But interdisciplinary journals like PLOS ONE, and its shinier brethren like Science Advances [6] and Nature Communications, are racing headlong into territory where the publication system has never functioned before. From a simple logistics point of view, it's not clear that we know how to scale up what is essentially an early 20th century evaluation process [7], which only worked because of specialization, to handle 21st century volumes of interdisciplinary science, and still feel good about the product (effectively peer-reviewed papers, for any reasonable definition of peer review).

So I think it's good that we have places like PLOS ONE that are experimenting in this area. And I think they rightly deserve our ridicule when they screw up royally, as in the case of this "Creator" paper. But let's not shame them into stopping, e.g., by boycotting them, when they are quite a bit braver than we are. Instead, I think we as a community need to think harder about the real problem, which is how to run an effective scientific publishing operation in a world where we cannot rely on specialization to manage the way we collectively allocate the attention of experts.

-----

[1] Ming-Jin Liu, Cai-Hua Xiong, Le Xiong, and Xiao-Lin Huang. "Biomechanical Characteristics of Hand Coordination in Grasping Activities of Daily Living." PLOS ONE 11(1): e0146193 (2016).

[2] The PLOS ONE Staff. "Retraction: Biomechanical Characteristics of Hand Coordination in Grasping Activities of Daily Living." PLOS ONE 11(3): e0151685 (2016).

[3] It seems possible that the authors could have rewritten the offending parts of the paper to make it clearer that they mean these features evolved naturally. However, PLOS apparently did not give them that option.

[4] The problems that high volumes induce are pretty fundamental and ubiquitous in academia. The most precious resource I have as a scientist is my attention, and it is very limited. And yet, there is an ever-increasing number of things that need attending to. The number of students in undergraduate CS programs is increasing; the number of applicants to PhD programs is increasing; the number of applicants to faculty openings is increasing; the number of papers needing to be reviewed is increasing; the number of papers I would like to read in order to stay up on the literature is increasing; the number of meetings I should probably be attending or organizing is increasing; etc., etc. This is a place where technology should be helping us focus on what is important, but instead it has just made everything worse by lowering barriers. Email is the perfect example.

[5] Arguably, science as a field was first formalized with the founding of the Royal Society in 1660.

[6] Full disclosure: I am currently an Associate Editor at Science Advances. We have our own version of the volume+diversity problem, and we have our own method for managing it. The method requires a lot of human attention in sorting things into buckets, and I have great respect for our sorters (who are also academics, like all the associate editors). But how scalable is our system? What mistakes does it tend to make? How could we lower the error rate? I don't think anyone knows the answers to these questions, and I am not sure anyone is even thinking that these are questions that need to be answered.

[7] Peer review was mainly carried out directly by the senior-most editors until the late 19th century, and it wasn't until the early 20th century that the process we recognize today, with external reviewers, took shape.

posted March 7, 2016 11:14 AM in Interdisciplinarity | permalink

Comments

I think you got straight to the point early in the article and, in my opinion, moved away from it afterwards: the real issue may not be the interdisciplinary nature of modern research, at least not as much as it is our limited attention and the overload of information and tasks. My personal experience is that I agree to review more papers, and join more program committees, than I should, simply out of the ethical reasoning that someone has to review the material produced by my peers, and it may very well be me making the extra effort. However, I fully realize that my reviews now sometimes don't have the same quality and depth as, say, those I wrote as a grad student (because back then I had a much lighter load of tasks and duties).

The solution is pretty simple, yet pragmatically unachievable: de-incentivize the overproduction of scientific contributions at the systemic level. If our academic system did not incentivize publishing more and more, we would see much less of the information and task overload that comes with keeping the system running. If we valued having fewer publications, but more impactful ones, and evaluated candidates (at all levels, across domains) accordingly, we would achieve a much more efficient system (at least in information-theoretic terms). But a well-oiled system is hard to change, and if anything we are going in exactly the opposite direction: we produce more and more because each new paper is another shot at popularity (as recent science-of-science research by Roberta Sinatra, Dashun Wang, Laszlo Barabasi, etc. clearly shows); in fact, we produce more so that we can shop each paper around to more top venues, "trying our luck" that it will indeed get reviewed in a vanity journal.

Posted by: Emilio Ferrara at March 7, 2016 09:30 AM