
April 01, 2012

Things to read while the simulator runs (higher ed. edition)

1.
Reforming Science: Methodological and Cultural Reforms by Arturo Casadevall and Ferric C. Fang

An opinion piece published by the American Society for Microbiology (with a popular summary here, titled Has Modern Science Become Dysfunctional?) about the increasingly perverse incentives in modern science. Despite the implication of the title, the authors are not arguing that science is failing to produce new or useful knowledge, merely that the social system of rewards discourages the type of curiosity-driven investigation that drove the dazzling scientific discoveries of the 20th century. On this point, I am highly sympathetic.

To be successful, today's scientists must often be self-promoting entrepreneurs whose work is driven not only by curiosity but by personal ambition, political concerns, and quests for funding.

They go on to argue that this hyper-competitive system does not, in fact, produce the best possible science, because scientists choose projects based on their likelihood of increasing their payoffs within these narrow domains (more grant dollars, more high-profile publications, etc.) rather than on their likelihood of producing genuine scientific progress. In short, scientific progress only sometimes aligns with ambition, politics, or funding priorities, but behavior almost always does, eventually.

This line resonated particularly strongly. When my non-academic friends ask me how I like being a professor, I often describe my first two years in exactly these terms: it's like running a startup company (nonstop fundraising, intense competition, enormous pressure) with all "temp" labor (students have to be trained, and then they leave after a few years). There are certainly joys associated with the experience, but it is a highly stressful one, and I am acutely cognizant of the reward structures currently in place.

2.
Psychology's Bold Initiative, by Siri Carpenter

Continuing the above theme, Carpenter writes about issues particular to psychology. The core problem is the "file-drawer problem": negative results tend not to be published. Combine that with noisy results from small-sample experiments and you have a tendency for statistical flukes to be published in high-profile journals as if they were facts. Carpenter describes an interesting community-based effort called PsychFileDrawer to push back against this pattern. The idea is to provide a venue for studies that only try to replicate existing claims, rather than focus on novel results in experimental psychology. Carpenter's coverage is both thoughtful and encouraging.
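
To make that mechanism concrete, here is a quick back-of-the-envelope simulation (my own sketch, not from Carpenter's article; the sample size, significance threshold, and number of studies are all arbitrary choices): many labs test a true null effect with small samples, only the "significant" results escape the file drawer, and the published record ends up full of sizable effects that do not exist.

```python
# Sketch of the file-drawer problem: simulate many small studies of a true null
# effect, "publish" only the significant ones, and inspect what gets into print.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 10000    # independent labs studying the same (nonexistent) effect
n_per_group = 15     # small samples per group
alpha = 0.05         # conventional significance threshold

published_effects = []
for _ in range(n_studies):
    treatment = rng.normal(0.0, 1.0, n_per_group)  # true effect size is zero
    control = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    if p < alpha:  # only "positive" results leave the file drawer
        published_effects.append(treatment.mean() - control.mean())

print(f"{len(published_effects)} of {n_studies} null studies get 'published'")
print(f"mean |published effect|: {np.mean(np.abs(published_effects)):.2f} "
      "(the true effect is 0)")
```

Roughly five percent of these null studies clear the significance bar, and the ones that do report sizable apparent effects, purely by chance. That is the sense in which flukes can masquerade as facts once the negative results stay in the drawer.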

That being said, experimental psychology seems better positioned to succeed with something like this than, say, complex systems. "Complex systems" as a field (or even network science, if you prefer that) is so broad that well-defined communities of dedicated, objective researchers have not really coalesced around specific sets of questions, a feature that seems necessary for there to be any incentive to check the results of published studies.

3.
Our Secret Nonacademic Histories, by Alexandra M. Lord, and
For Science Ph.D.'s, There Is No One True Path, by Jon Bardin.
Both in the Chronicle of Higher Education.

These pieces cover parts of the larger discussion about the crisis in higher education. The first is more about the humanities (which seem to have a greater disdain for non-academic careers than the sciences?), while the second focuses more on the benefits of getting a science PhD, in terms of intellectual sophistication, problem solving, project completion, etc., even if your career trajectory takes you outside of science. I'm highly sympathetic to the latter idea, and indeed, my impression is that many of the most exciting job opportunities at technology companies really demand the kind of training that only a PhD in a scientific or technical field can give you.

Update 6 April: Regarding the point in #2 about complex systems, perhaps I was too hasty. What I'd meant to suggest was that having a community of researchers all interested in answering the same basic questions seems like a sufficient condition for science to produce genuinely new knowledge. In other words, the best way to make forward progress is to have many critical eyes all examining the problem from multiple, and sometimes redundant, angles, publishing both new and repeated results via peer review [1]. But this statement seems not to be true.

Cancer research can hardly be said to have a dearth of researchers, and yet a new editorial, written by C. Glenn Begley, a former head of research at the big biotech firm Amgen, and an oncology professor at the University of Texas, argues that the majority of 'landmark' (their term) studies in oncology, many of which were published in the top journals and many of which spawned entire sub-disciplines with hundreds of followup papers, cannot be reproduced.

First, we know that a paper being peer reviewed and then published does not imply that its results are correct. Thus, the fact that 47 of the 53 results Begley's team examined could not be reproduced is not, by itself, worrying. What makes it striking is that these results were chosen for testing precisely because they are viewed as very important or influential, and that many of them did generate ample followup studies. That is, something seems to have interfered with the self-corrective ideal of science that scientists are taught in graduate school, even in a field as big as cancer research.

Derek Lowe provides some nice commentary on the article, and points to a popular press story that includes some additional comments by Begley. The important point is that Begley's position at Amgen provided him with the resources necessary to actually check the results of many of the studies [2]. Here's the money quote from the popular piece:

Partway through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.

"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."

"The best story." Since when has science been about the best story? Well, since always. The problem, I think, is not so much the story telling, but rather the emphasis on the story at the expense of the scientific, i.e., objectively verifiable, knowledge. It's not clear to me who in the peer-review pipeline, at the funding agencies or in the scientific community at large should be responsible for discouraging such a misplaced emphasis, but it does seem to be a problem, not just in young and diffuse fields like complex systems (or complex networks), but also in big and established fields like cancer research.

-----

[1] Being published after peer review is supposed to mean that there are no obviously big mistakes. But, in practice, peer review is more of an aspirational statement, and passing it does not necessarily imply that there are no mistakes, even big ones, or obvious ones. In short, peer review is a human process, complete with genuine altruism, social pressures, "bad hair days," weird incentives and occasional skullduggery. The hope is that it works, on average, over the long run. Like democracy is for forms of government, peer review may be the worst way to vet scientific research, except for all the others.

[2] Can you imagine the NIH funding such a big effort to check the results of so many studies? I cannot.
