July 01, 2013
Small science for the win? Maybe not.
(A small note: with the multiplication of duties and obligations, both at home and at work, writing full-length blog entries has shifted to a lower priority relative to writing full-length scientific articles. Blogging will continue! But selectively so. Paper writing is moving ahead nicely.)
In June of this year, a paper appeared in PLOS ONE [1] that made a quantitative argument for public scientific funding agencies to pursue a "many small" approach in dividing up their budgets. The authors, Jean-Michel Fortin and David Currie (both at the University of Ottawa), argued from data that scientific impact per dollar decreases as grants get larger. Thus, if a funding agency wants to maximize impact, it should fund a large number of small or moderate-sized research grants rather than a small number of big grants to large centers or consortia. This paper made a big splash among scientists, especially the large number of us who rely on modest-sized grants to do our kind of science.
Many of us, I think, have felt in our guts that the long-running trend among funding agencies (especially in the US) toward making fewer but bigger grants was poisonous to science [2], but we lacked the data to support or quantify that feeling. Fortin and Currie seemed to provide exactly what we wanted: data showing that big grants are not worth it.
How could such a conclusion be wrong? Despite my sympathy, I'm not convinced they have it right. The problem is not the underlying data, but rather overly simplistic assumptions about how science works and fatal confounding effects.
Fortin and Currie ask how certain measures of productivity, specifically counts of publications or citations, vary as a function of total research funding. They fit simple allometric or power functions [3] to these data and then ask whether the fitted function is super-linear (increasing return on investment; Big Science = Good), linear (proportional return; Big Science = sum of Small Science), or sub-linear (decreasing return; Big Science = Not Worth It). In every case, they identify a sub-linear relationship, i.e., increasing funding by X% yields a less-than-X% increase in productivity. In short, a few Big Sciences are less impactful and less productive than many Small Sciences.
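To make the shape of their analysis concrete, here is a minimal sketch in Python (with made-up numbers; this is not their data or their code) of fitting log y = A*log x + B and reading the regime off the exponent A:

```python
# Minimal sketch (not the authors' code): fit log y = A*log x + B by least
# squares and classify the scaling as super-linear, linear, or sub-linear.
import numpy as np
from scipy import stats

# Hypothetical data: total funding (dollars) and publication counts per grantee.
funding = np.array([5e4, 1e5, 2.5e5, 5e5, 1e6, 2e6, 5e6])
papers = np.array([2, 3, 5, 7, 10, 13, 20])

fit = stats.linregress(np.log(funding), np.log(papers))
A, B = fit.slope, fit.intercept

if A > 1.05:
    regime = "super-linear (increasing returns; Big Science = Good)"
elif A >= 0.95:
    regime = "roughly linear (proportional returns)"
else:
    regime = "sub-linear (decreasing returns; Big Science = Not Worth It)"

print(f"exponent A = {A:.2f}: {regime}")
```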
But this conclusion only follows from the data if you believe that all scientific questions are equivalent in size and complexity, and that any paper could in principle be written by a research group at any funding level. Both beliefs are certainly false, and so the conclusion does not follow from the data.
Here's an illustration of the first problem. In the early days of particle physics, small research groups made significant progress because the equipment was sufficiently simple that a small team could build and operate it, and because the questions of scientific interest were accessible using such equipment. Small Science worked well. But today the situation is different. The questions of scientific interest are largely only accessible with enormously complex machines, run by large research teams supported by large budgets. Big Science is the only way to make progress because the tasks are so complex. Unfortunately, this fact is lost in Fortin and Currie's analysis because their productivity variables do not expose the difficulty of the scientific question being investigated, and are likely inversely correlated with difficulty.
This illustrates the second problem. The cost of answering a question is largely independent of the number of papers required to describe the answer. Only a handful of papers will be published describing the experiments that discovered the Higgs boson, even though its discovery was possibly one of the costliest physics experiments so far. Nor is it clear that the few papers a big team produces will necessarily be highly cited: citations come only from the publication of new papers, and when only a few large teams work in a particularly complex area, few new papers, and hence few new citations, get produced.
In fact, by defining "impact" as counts related to paper production [4,5], Fortin and Currie may have inadvertently engineered the conclusion that Small Science will maximize impact. If project complexity correlates positively with grant size and negatively with paper production, then a decreasing return in paper/citation production as a function of grant size is almost sure to appear in any data. So while the empirical results seem correct, they are of ambiguous value. Furthermore, a blind emphasis on funding researchers to solve simple tasks (small grants), to pick all that low-hanging fruit people talk about, seems like an obvious way to maximize paper and citation production because simpler tasks can be solved faster and at lower cost than harder, more complex tasks. You might call this the "Washington approach" to funding, since it kicks the hard problems down the road, to be funded, maybe, in the future.
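To see how easily that confound generates the observed pattern, here is a purely illustrative toy simulation (parameters invented by me, nothing from the paper): every team turns funding into papers at the same underlying rate, but larger grants go to more complex projects, and complexity suppresses paper counts. The fitted exponent comes out well below 1 anyway.

```python
# Toy simulation of the confound (illustrative only, made-up parameters):
# complexity rises with grant size and suppresses paper counts, so a
# sub-linear papers-vs-funding exponent appears even though every team is
# equally "productive" per unit of difficulty.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000

log_funding = rng.uniform(np.log(5e4), np.log(5e7), size=n)    # grant sizes
complexity = 0.8 * log_funding + rng.normal(0, 0.5, size=n)    # bigger grants, harder problems
log_papers = 1.0 * log_funding - 0.6 * complexity + rng.normal(0, 0.3, size=n)

fit = stats.linregress(log_funding, log_papers)
print(f"apparent exponent: {fit.slope:.2f}")   # roughly 0.5, well below 1
```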
From my own perspective, I worry about the trend at funding agencies away from small grants. As Fortin and Currie point out, research has shown that grant reviewers are notoriously bad at predicting success [6] (and so are peer reviewers). This fact alone is a sound argument for a major emphasis on funding small projects: making a larger number of bets on who will be successful brings us closer to the expected or background success rate and minimizes the impact of luck (and reduces bias due to collusion effects). A mixed funding strategy is clearly the only viable solution, but what kind of mix is best? How much money should be devoted to big teams, to medium teams, and to small teams? I don't know, and I don't think there is any way to answer that question purely from looking at data. I would hope that the answer is chosen not only by considering things like project complexity, team efficiency, and true scientific impact, but also issues like funding young investigators, maintaining a large and diverse population of researchers, promoting independence among research teams, etc. We'll see what happens.
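As a rough illustration of the many-small-bets point (a toy model with arbitrary numbers, not an analysis of any real portfolio): if reviewers cannot predict which projects will pan out, spreading a fixed budget over more, smaller grants pulls the realized success rate toward the background rate and shrinks the role of luck.

```python
# Toy model (arbitrary units): a fixed budget, a background success rate that
# reviewers cannot predict, and three ways to split the budget. More, smaller
# grants give the same expected success rate with much less variance (luck).
import numpy as np

rng = np.random.default_rng(1)
budget, p_success, trials = 100.0, 0.1, 10_000

def realized_rates(grant_size):
    n_grants = int(budget // grant_size)
    successes = rng.binomial(n_grants, p_success, size=trials)
    return successes / n_grants

for size in (25.0, 5.0, 1.0):   # a few big bets ... many small bets
    rates = realized_rates(size)
    print(f"grant size {size:>4}: mean {rates.mean():.3f}, std {rates.std():.3f}")
```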
Update 1 July 2013: On Twitter, Kieron Flanagan asks "can we really imagine no particle physics w/out big sci... or just the kind of particle physics we have today?" To me, the answer seems clear: the more complex the project, the larger the funding it legitimately needs. That funding manages the complexity and keeps the many moving parts moving together. This kind of coordination is not free, and those coordination dollars don't produce additional publications; they are what make any publication possible at all. Without them, it would be like dividing up 13.25 billion dollars among about 3,000 small teams of scientists and expecting them to find the Higgs without pooling their resources. So yes, no big science = no complex projects. Modern particle physics requires magnificently complex machines, without which we'd have some nice mathematical models but no ability to test them.
To be honest, physics is perhaps too good an example of the utility of big science. The various publicly-supported research centers and institutes, or large grants to individual labs, are a much softer target: if we divided up their large grants into a bunch of smaller ones, could we solve all the same problems and maybe more? Or, could we solve a different set of problems that would more greatly advance our understanding of the world (which is distinct from producing more papers or more citations)? This last question is hard because it means choosing between questions, rather than being more efficient at answering the same questions.
-----
[1] J.-M. Fortin and D.J. Currie, Big Science vs. Little Science: How Scientific Impact Scales with Funding. PLoS ONE 8(6): e65263 (2013).
[2] By imposing large coordination costs on the actual conduct of science, by restricting the range of scientific questions being investigated, by increasing the cost of failure to win a big grant (because the proposals are enormous and require a huge effort to produce), by channeling graduate student training into big projects where only a few individuals actually gain experience being independent researchers, etc. Admittedly, these claims are based on what you might call "domain expertise," rather than hard data.
[3] Functions of the form log y = A*log x + B, which look like straight lines on a log-log plot. These bivariate functions can be fitted using standard regression techniques. I was happy to see the authors include non-parametric fits to the same data, which seem to mostly agree with the parametric fits.
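For concreteness, here's a toy version of that kind of parametric-versus-non-parametric comparison on synthetic data (lowess is just my stand-in for a non-parametric fit; I don't know exactly which method they used):

```python
# Compare a straight-line fit in log-log space against a lowess smoother on
# synthetic data (not the authors' data); if the power law is adequate, the
# smoother should hug the fitted line.
import numpy as np
from scipy import stats
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(2)
log_x = np.sort(rng.uniform(10, 18, size=300))             # log funding
log_y = 0.6 * log_x + 1.0 + rng.normal(0, 0.4, size=300)   # sub-linear "truth"

A, B = stats.linregress(log_x, log_y)[:2]                  # parametric fit
smooth = lowess(log_y, log_x, frac=0.3, return_sorted=True)

residual = smooth[:, 1] - (A * smooth[:, 0] + B)
print(f"A = {A:.2f}; max |lowess - line| = {np.abs(residual).max():.2f}")
```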
[4] Citation counts are just a proxy for papers, since only new papers can increase an old paper's count. Also, it's well known that the larger and more active fields (like biomedical research or machine learning) produce more citations (and thus larger citation counts) than smaller, less active fields (like number theory or experimental particle physics). From this perspective, if a funding agency took Fortin and Currie's advice literally, they would cut funding completely for all the "less productive" fields like pure mathematics, economics, computer systems, etc., and devote that money to "more productive" fields like biomedical research and whoever publishes in the International Conference on World Wide Web.
[5] Fortin and Currie do seem to be aware of this problem, but proceed anyway. Their sole caveat is a single sentence: "Campbell et al. [12] discuss caveats of the use of bibliometric measures of scientific productivity such as the ones we used."
[6] For instance, see Berg, Nature 489, 203 (2012).
posted July 1, 2013 03:50 PM in Scientifically Speaking | permalink