
July 04, 2013

If birds are this smart, how smart were dinosaurs?

I continue to be fascinated and astounded by how intelligent birds are. A new paper in PLOS ONE [1,2,3] by Auersperg, Kacelnik and von Bayern (at Vienna, Oxford and the Max-Planck-Institute for Ornithology, respectively) uses Goffin’s Cockatoos to demonstrate that these birds are capable of learning and performing a complex sequence of tasks in order to get a treat. Here are the authors describing the setup:

Here we exposed Goffins to a novel five-step means-means-end task based on a sequence of multiple locking devices blocking one another. After acquisition, we exposed the cockatoos to modifications of the task such as reordering of the lock order, removal of one or more locks, or alterations of the functionality of one or more locks. Our aim was to investigate innovative problem solving under controlled conditions, to explore the mechanism of learning, and to advance towards identifying what it is that animals learn when they master a complex new sequential task.

The sequence was sufficiently long that the authors argue the goal the cockatoos were seeking must have been an abstract goal. The implication is that the cockatoos are capable of a kind of abstract, goal-oriented planning and problem solving that is similar to what humans routinely exhibit. Here's what the problem solving looks like:

The authors' conclusions put things nicely into context, and demonstrate the appropriate restraint with claims about what is going on inside the cockatoos' heads (a restraint we humans could stand to exercise more often when interpreting each other's behavior):

The main challenge of cognitive research is to map the processes by which animals gather and use information to come up with innovative solutions to novel problems, and this is not achieved by invoking mentalistic concepts as explanations for complex behaviour. Dissecting the subjects’ performance to expose their path towards the solution and their response to task modifications can be productive; even extraordinary demonstrations of innovative capacity are not proof of the involvement of high-level mental faculties, and conversely, high levels of cognition could be involved in seemingly simple tasks. The findings from the transfer tests allow us to evaluate some of the cognition behind the Goffins’ behaviour. Although the exact processes still remain only partially understood, our results largely support the supposition that subjects learn by combining intense exploratory behavior, learning from consequences, and some sense of goal directedness.

So it seems that the more we learn about birds, the more remarkably intelligent some species appear to be. Which raises a question: if some bird species are so smart, how smart were dinosaurs? Unfortunately, we don't have much of an idea, because we don't know how much bird brains have diverged from their ancestral non-avian dinosaur form. But I'm going to go out on a limb here and suggest that they may have been just as clever as modern birds.

-----

[1] Auersperg, Kacelnik and von Bayern, Explorative Learning and Functional Inferences on a Five-Step Means-Means-End Problem in Goffin’s Cockatoos (Cacatua goffini). PLOS ONE 8(7): e68979 (2013).

[2] Some time ago, PLOS ONE announced that they were changing their name from "PLoS ONE" to "PLOS ONE". But confusingly, on their own website, they give the citation to new papers as "PLoS ONE". I still see people use both, and I have a slight aesthetic preference for "PLoS" over "PLOS" (a preference perhaps rooted in the universal understanding that ALL CAPS is like yelling on the Interwebs, and every school child knows that yelling for no reason is rude).

[3] As it turns out, these same authors have some other interesting results with Goffin's Cockatoos, including tool making and use. io9 has a nice summary, plus a remarkable video of the tool-making in action.


July 01, 2013

Small science for the win? Maybe not.

(A small note: with the multiplication of duties and obligations, both at home and at work, writing full-length blog entries has shifted to a lower priority relative to writing full-length scientific articles. Blogging will continue! But selectively so. Paper writing is moving ahead nicely.)

In June of this year, a paper appeared in PLOS ONE [1] that made a quantitative argument for public scientific funding agencies to pursue a "many small" approach in dividing up their budgets. The authors, Jean-Michel Fortin and David Currie (both at the University of Ottawa), argued from data that scientific impact per dollar is a decreasing function of total research funding. Thus, if a funding agency wants to maximize impact, it should fund many small or moderate-sized research grants rather than a small number of big grants to large centers or consortia. This paper made a big splash among scientists, especially the large number of us who rely on modest-sized grants to do our kind of science.

Many of us, I think, have felt in our guts that the long-running trend among funding agencies (especially in the US) toward making fewer but bigger grants was poisonous to science [2], but we lacked the data to support or quantify that feeling. Fortin and Currie seemed to provide exactly what we wanted: data showing that big grants are not worth it.

How could such a conclusion be wrong? Despite my sympathy, I'm not convinced they have it right. The problem is not the underlying data, but rather overly simplistic assumptions about how science works and fatal confounding effects.

Fortin and Currie ask whether certain measures of productivity, specifically numbers of publications or citations, vary as a function of total research funding. They fit simple allometric or power functions [3] to these data and then look at whether the fitted function is super-linear (increasing return on investment; Big Science = Good), linear (proportional return; Big Science = sum of Small Science) or sub-linear (decreasing return; Big Science = Not Worth It). In every case, they identify a sub-linear relationship, i.e., increasing funding by X% yields a less-than-X% increase in productivity. In short, a few Big Sciences are less impactful and less productive than many Small Sciences.
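To make the shape of their analysis concrete, here is a minimal sketch in Python of the kind of fit described in footnote [3]: regress log(productivity) on log(funding) and classify the fitted exponent. This is not the authors' code, and the data below are synthetic, invented purely for illustration.

```python
# Minimal sketch of a log-log (power-law) fit, using invented, synthetic data.
# Not the authors' code; the exponent is built in at 0.7 just to show what a
# sub-linear relationship looks like when fitted.
import numpy as np

rng = np.random.default_rng(0)
funding = rng.lognormal(mean=12, sigma=1.0, size=500)            # grant sizes (dollars)
papers = 0.01 * funding**0.7 * rng.lognormal(0, 0.3, size=500)   # assumed sub-linear "truth"

# Fit log y = A*log x + B by ordinary least squares.
A, B = np.polyfit(np.log(funding), np.log(papers), deg=1)

if A > 1.05:
    verdict = "super-linear: increasing returns to funding"
elif A > 0.95:
    verdict = "roughly linear: proportional returns"
else:
    verdict = "sub-linear: diminishing returns to funding"
print(f"fitted exponent A = {A:.2f} ({verdict})")
```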

But this conclusion only follows from the data if you believe that all scientific questions are equivalent in size and complexity, and that any paper could in principle be written by a research group of any funding level. These beliefs are certainly false, and this implies that the conclusions don't follow from the data.

Here's an illustration of the first problem. In the early days of particle physics, small research groups made significant progress because the equipment was sufficiently simple that a small team could build and operate it, and because the questions of scientific interest were accessible using such equipment. Small Science worked well. But today the situation is different. The questions of scientific interest are largely only accessible with enormously complex machines, run by large research teams supported by large budgets. Big Science is the only way to make progress because the tasks are so complex. Unfortunately, this fact is lost in Fortin and Currie's analysis because their productivity variables do not expose the difficulty of the scientific question being investigated, and are likely inversely correlated with difficulty.

This illustrates the second problem. The cost of answering a question seems largely independent of the number of papers required to describe the answer. Only a handful of papers will be published describing the experiments that discovered the Higgs boson, even though its discovery was possibly one of the costliest physics experiments so far. Furthermore, it is not clear that the few papers produced by a big team should necessarily be highly cited: citations are generated by the publication of new papers, and if only a few research teams are working in a particularly complex area, relatively few new papers appear to do the citing. The number of new citations is thus not independent of the size and number of the teams.

In fact, by defining "impact" as counts related to paper production [4,5], Fortin and Currie may have inadvertently engineered the conclusion that Small Science will maximize impact. If project complexity correlates positively with grant size and negatively with paper production, then a decreasing return in paper/citation production as a function of grant size is almost sure to appear in any data. So while the empirical results seem correct, they are of ambiguous value. Furthermore, a blind emphasis on funding researchers to solve simple tasks (small grants), to pick all that low-hanging fruit people talk about, seems like an obvious way to maximize paper and citation production because simpler tasks can be solved faster and at lower cost than harder, more complex tasks. You might call this the "Washington approach" to funding, since it kicks the hard problems down the road, to be funded, maybe, in the future.
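To see how such an artifact can arise, consider a small, purely illustrative simulation (my own assumptions, not the authors' model): suppose grant size grows with project complexity, and more complex projects yield fewer papers per dollar. Fitting the same power function to data generated this way recovers a sub-linear exponent, even though the regression says nothing about whether the big projects were worth their cost.

```python
# Purely illustrative: a confounded world in which bigger grants fund harder
# problems, and harder problems produce fewer papers per dollar. The fitted
# papers-vs-funding exponent comes out sub-linear by construction.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
complexity = rng.uniform(1, 10, size=n)                      # difficulty of the question
funding = 1e5 * complexity * rng.lognormal(0, 0.2, size=n)   # grant size tracks complexity
papers = 1e-4 * funding / np.sqrt(complexity) * rng.lognormal(0, 0.3, size=n)

A, _ = np.polyfit(np.log(funding), np.log(papers), deg=1)
print(f"apparent scaling exponent: {A:.2f}  (sub-linear, i.e. < 1)")
```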

From my own perspective, I worry about the trend at funding agencies away from small grants. As Fortin and Currie point out, research has shown that grant reviewers are notoriously bad at predicting success [6] (and so are peer reviewers). This fact alone is a sound argument for a major emphasis on funding small projects: making a larger number of bets on who will be successful brings us closer to the expected or background success rate and minimizes the impact of luck (and reduces bias due to collusion effects). A mixed funding strategy is clearly the only viable solution, but what kind of mix is best? How much money should be devoted to big teams, to medium teams, and to small teams? I don't know, and I don't think there is any way to answer that question purely from looking at data. I would hope that the answer is chosen not only by considering things like project complexity, team efficiency, and true scientific impact, but also issues like funding young investigators, maintaining a large and diverse population of researchers, promoting independence among research teams, etc. We'll see what happens.
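The variance argument can be made concrete with a toy Monte Carlo (again, invented numbers of my own, not data): with a fixed budget and a fixed background success rate, splitting the budget across many small grants gives the same expected return as a few big bets, but with far less luck-driven spread.

```python
# Toy Monte Carlo (invented numbers): a fixed budget, projects that succeed at
# a fixed background rate, and a success that returns value proportional to
# its grant size. More, smaller grants -> same mean return, much smaller spread.
import numpy as np

rng = np.random.default_rng(2)
budget, p_success, trials = 100.0, 0.3, 10_000

def total_return(n_grants):
    successes = rng.random((trials, n_grants)) < p_success
    return successes.sum(axis=1) * (budget / n_grants)

for n in (4, 20, 100):
    r = total_return(n)
    print(f"{n:>3} grants: mean return {r.mean():5.1f}, std {r.std():5.1f}")
```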

Update 1 July 2013: On Twitter, Kieron Flanagan asks "can we really imagine no particle physics w/out big sci... or just the kind of particle physics we have today?" To me, the answer seems clear: the more complex the project, the legitimately bigger the funding needs are. That funding manages the complexity, keeps the many moving parts moving together, and ultimately ensures a publication is produced. This kind of coordination is not free, and those dollars don't produce additional publications. Instead, their work allows for any publication at all. Without them, it would be like dividing up 13.25 billion dollars among about 3,000 small teams of scientists and expecting them to find the Higgs, without pooling their resources. So yes, no big science = no complex projects. Modern particle physics requires magnificently complex machines, without which, we'd have some nice mathematical models, but no ability to test them.

To be honest, physics is perhaps too good an example of the utility of big science. The various publicly-supported research centers and institutes, or large grants to individual labs, are a much softer target: if we divided up their large grants into a bunch of smaller ones, could we solve all the same problems and maybe more? Or, could we solve a different set of problems that would more greatly advance our understanding of the world (which is distinct from producing more papers or more citations)? This last question is hard because it means choosing between questions, rather than being more efficient at answering the same questions.

-----

[1] J.-M. Fortin and D.J. Currie, Big Science vs. Little Science: How Scientific Impact Scales with Funding. PLoS ONE 8(6): e65263 (2013).

[2] By imposing large coordination costs on the actual conduct of science, by restricting the range of scientific questions being investigated, by increasing the cost of failure to win a big grant (because the proposals are enormous and require a huge effort to produce), by channeling graduate student training into big projects where only a few individuals actually gain experience being independent researchers, etc. Admittedly, these claims are based on what you might call "domain expertise," rather than hard data.

[3] Functions of the form log y = A*log x + B (equivalently, y = c*x^A), which appear as straight lines on a log-log plot. These bivariate functions can be fitted using standard regression techniques. I was happy to see the authors include non-parametric fits to the same data, which seem to mostly agree with the parametric fits.

[4] Citation counts are just a proxy for papers, since only new papers can increase an old paper's count. Also, it's well known that the larger and more active fields (like biomedical research or machine learning) produce more citations (and thus larger citation counts) than smaller, less active fields (like number theory or experimental particle physics). From this perspective, if a funding agency took Fortin and Currie's advice literally, they would cut funding completely for all the "less productive" fields like pure mathematics, economics, computer systems, etc., and devote that money to "more productive" fields like biomedical research and whoever publishes in the International Conference on World Wide Web.

[5] Fortin and Currie do seem to be aware of this problem, but proceed anyway. Their sole caveat is this: "Campbell et al. [12] discuss caveats of the use of bibliometric measures of scientific productivity such as the ones we used."

[6] For instance, see Berg, Nature 489, 203 (2012).
