July 07, 2006

The popularity contest

Congratulations to my friends Cosma Shalizi over at Three-Toed Sloth and Dave Bacon at Quantum Pontiff for making the top 50 of Nature News' ranking of scientists' blogs! I'm duly impressed with you both, and simply hope that you continue to write interesting stuff at your usual prolific rates.

Cosmic Variance, which I occasionally frequent for my fix of cosmological weirdness, also made the list at #4.

posted July 7, 2006 02:50 AM in Reviews | permalink | Comments (0)

July 06, 2006

An ontological question about complex systems

Although I've been reading Nature News for several years now (as part of my daily trawl for treasure in the murky waters of science), I first came to recognize one of their regular writers, Philip Ball, when he wrote about my work on terrorism with Maxwell Young. His essay, now hidden behind Nature's silly subscription-only barrier, sounded an appropriately cautionary note about using statistical patterns of human behavior to predict the future, and was even titled "Don't panic, it might never happen."

The idea that there might be statistical laws that govern human behavior can be traced, as Ball does in his essay, back to the English philosopher Thomas Hobbes (1588-1679) in Leviathan, and to the French positivist philosopher Auguste Comte (1798-1857), who is known as the father of sociology and who apparently also coined the term "altruism". Both were inspired by physicists' success in mechanizing the behavior of nature, and sought to do the same for human societies.

It seems, however, that somewhere between then and now, much of sociology has lost interest in such laws. A good friend of mine who is in graduate school for sociology (and who shall remain nameless to protect her from the politics of academia) says that her field is obsessed with the idea that context, or nurture, drives all significant human behavior, and that it rejects the idea that overarching patterns or laws of society might exist. These, apparently, are the domain of biology, and thus Not Sociology. I'm kind of stunned that any field that takes itself seriously would so thoroughly cling to the nearly medieval notion of the tabula rasa (1) in the face of unrelenting scientific evidence to the contrary. But, if this territory has been abandoned by sociologists (2), it has recently, and enthusiastically, been claimed by physicists (who may or may not recognize the similarity of their work to a certain idea in science fiction).

Ball's background is in chemistry and statistical physics, and having spent many years as an editor at Nature, he has acquired a broad perspective on modern science. What makes his writing so enjoyable is the way he places scientific advances in their proper historical context, showing both where the inspiration may have come from and how other scientists were developing similar or alternative ideas concurrently. These strengths are certainly evident in his article about the statistical regularity of terrorism, but he puts them to greater use in several books, in particular one on physicists' efforts to create something he calls sociophysics. As it turns out, however, this connection between physics and sociology is not a new one, and the original inspiration for statistical physics (one of the three revolutionary ideas in modern physics, the other two being quantum mechanics and relativity) is owed to social scientists.

In the mid-1800s, James Clerk Maxwell, one of the fathers of statistical physics, read Henry Thomas Buckle's lengthy History of Civilization in England. Buckle was a historian by trade, and a champion of the idea that society's machinations are bound by fundamental laws. Maxwell, struggling with the question of how to describe the various motions of particles in a gas, was struck by Buckle's descriptions of the statistical nature of studies of society. Such studies sought not to describe each individual and their choices exactly, but instead to represent the patterns of behavior statistically, and they often pointed to surprising regularities, e.g., the near-stable birth or suicide rates in a particular region. As a result, Maxwell abandoned the popular approach of describing gas particles using only Newtonian mechanics, i.e., the attempt to describe every particle's position and motion exactly, in favor of a statistical approach that focused on the distribution of velocities.
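To make that shift concrete: the payoff of Maxwell's statistical approach is the now-standard velocity distribution for an ideal gas, which gives up on individual trajectories entirely and instead specifies only the probability density of observing a particle with speed v at temperature T. (The form below is the standard textbook result, quoted here purely for illustration; it is not, of course, anything found in Buckle.)

f(v) = 4\pi \left( \frac{m}{2\pi k_B T} \right)^{3/2} v^2 \, \exp\!\left( -\frac{m v^2}{2 k_B T} \right)

where m is the particle mass and k_B is Boltzmann's constant.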

It was the profound success of these statistical descriptions that helped cement the approach as one of the most valuable tools available to physicists, and it brought about some deep shifts in our understanding of gases, materials and even astrophysics. So, it seems fitting that statistical physicists are now returning to their roots by considering statistical laws of human behavior. Alas, I doubt that most such physicists appreciate this fact.

These efforts, which Ball surveys in "Critical Mass" (Farrar, Straus and Giroux, 2004) via a series of well-written case studies, have dramatically altered our understanding of phenomena as varied as traffic patterns (which have liquid, gaseous, solid and meta-stable states, along with the corresponding phase transitions), voting patterns in parliamentary elections (which display nice heavy-tailed statistics), the evolution of pedestrian trails across a university quad, economics and the statistics of businesses and markets, and, if only shallowly, social networks. Although his exposition is certainly aimed at the layman, he does not shy away from technical language when appropriate. Pleasantly, he even reproduces figures from the original papers when it serves his explanations. Given that these phenomena were drawn from a burgeoning field of interdisciplinary research, it's easy to forgive him for omitting some of my favorite topics, treating others only briefly, and mercifully leaving out the hobby horses of cellular automata, genetic algorithms and artificial life.

Now, after seeing that list of topics, you might think that "Critical Mass" is a book about complex systems, and you might be right. But you might be wrong, too, which is the problem when there's no strict definition of a term. So, let's assume it is such a book, and see what that assumption offers in terms of clarifying the corresponding ontological question. For one thing, Ball's choices suggest that perhaps we do not need other ill-defined properties like emergence, self-organization or robustness (3) to define a complex system. Instead, perhaps when we say we are studying a "complex system," we simply mean that it has a highly heterogeneous composition that we seek to explain using statistical mechanisms. To me, the first half of that definition means that, because of my limited mental capacity to grasp complicated equations, relationships or a tremendously large configuration space, I pretty much have to use a statistical characterization that omits most of the detailed structure of the system; I say heterogeneous because homogeneous systems are much easier to explain using traditional statistical mechanics. The second half means that I'm not merely interested in describing the system, which can certainly be done using traditional statistics, but rather in explaining the rules and laws that govern the formation, persistence and evolution of its structure. This definition is attractive to me both for its operational and utilitarian aspects, and because it doesn't require me to wave my hands, use obfuscating jargon or otherwise change the subject.

In general, it's the desire to establish laws that reflects the field's roots in physics, and it is this desire that distinguishes complex systems from traditional statistics and machine learning. In those areas, the focus seems to me to be more on predictive power ("Huzzah! My error rate is lower than yours.") and less on mechanisms. My machine-learning friends tell me that people are getting more interested in the "interpretability" of their models, but I'm not sure this is the same thing as building models that reflect the true mechanical nature of the underlying system... of course, one fundamental difference between much of statistical learning and what I've described above is that for many problems, there's no underlying mechanism at all! We shouldn't expect a problem like keeping the spam out of my inbox to exhibit nice mechanistic behavior, and there are a tremendous number of such problems out there today. Fortunately, I'm happy to leave those to people who care more about error rates than mechanisms, and I hope they're happy to leave the study of the (complex) natural world, mechanisms and all, to me.

Updates, July 7

(1) The notion of the tabula rasa is not antithetical to the idea that there are patterns in social behavior, but patterns per se are not the same as the kind of societal laws that the founders of sociology were interested in; i.e., sociology apparently believes these patterns to be wholly the result of culture, and not driven by things that every human shares, like our evolutionary history as a species. I suppose there's a middle ground here, in which society has created the appearance of laws, which the sociophysicists then discover and mistake for absolutes. Actually, I'm sure that much of what physicists have done recently can be placed in this category.

(2) It may be the case that it is merely the portion of sociology that my friend is most familiar with that expresses this odd conviction, and that there are subfields that retain the idea that true mechanistic laws do operate in social systems. For all I know, social network analysis people may be of this sort; it would be nice to have an insider's perspective on this.

(3) Like the notions of criticality and universality, these terms actually do have precise, technical definitions in their proper contexts, but they've recently been co-opted in imprecise ways and are now, unfortunately and in my opinion, basically meaningless in most of the complex systems literature.

posted July 6, 2006 07:09 PM in Reviews | permalink | Comments (0)

December 19, 2005

On modeling the human response time function; Part 3.

Much to my surprise, this morning I awoke to find several emails in my inbox apparently related to my commentary on the Barabasi paper in Nature. This morning, Anders Johansen pointed out to Luis Amaral and me (I can only assume that he has already communicated this to Barabasi) that in 2004 he published an article entitled "Probing human response times" in Physica A about the very same topic, using the very same data as Barabasi's paper. In it, he displays the now-familiar heavy-tailed distribution of response times and fits a power law of the form P(t) ~ 1/(t+c), where c is a constant estimated from the data. Asymptotically, this is the same as Barabasi's P(t) ~ 1/t; it differs in the lower tail, i.e., for t < c, where it flattens out. As an originating mechanism, he suggests something related to a spin-glass model of human dynamics.
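The asymptotic equivalence is a one-line check: writing the shifted form as a product makes both limits plain,

\frac{1}{t+c} \;=\; \frac{1}{t} \cdot \frac{1}{1 + c/t} \;\approx\; \frac{1}{t} \quad (t \gg c), \qquad \frac{1}{t+c} \;\approx\; \frac{1}{c} \quad (t \ll c),

so the two fits disagree only below the scale set by c, where Johansen's form levels off to a constant.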

Although Johansen's paper raises other issues, which I'll discuss briefly in a moment, let's step back and think about this controversy from a scientific perspective. There are two slightly different approaches to modeling being employed to understand the response-time function of human behavior. The first is a purely "fit-the-data" approach, which is largely what Johansen has done, and certainly what Amaral's group has done. The other, employed by Barabasi, uses enough data analysis to extract some interesting features, posits a mechanism for the origin of those features, and then sets about connecting the two. The advantage of developing such a mechanistic explanation is that (if done properly) it provides falsifiable hypotheses and can move the discussion past simple data-analysis techniques. The trouble begins, as I've mentioned before, when either a possible mechanistic model is declared "correct" before being properly vetted, or when an insufficient amount of data analysis is done before positing a mechanism. The latter kind of trouble invites a debate over how much support the data really provide to the proposed mechanism, and is exactly the source of the exchange between Barabasi et al. and Stouffer et al.

I tend to agree with the idea implicitly put forward in Stouffer et al.'s comment that Barabasi should have done more thorough data analysis before publishing, or alternatively, been a little more cautious in his claims of the universality of his mechanism. In light of Johansen's paper, and his statement that he and Barabasi spoke at the 2003 talk where Johansen presented these results, there is now the specter that either previous work was not cited that should have been, or something more egregious happened. That's not to say this aspect of the story isn't an important issue in itself, but it is separate from the issues regarding the modeling, and it is those with which I am primarily concerned. Still, given the high profile of articles published in journals like Nature, this kind of gross error in attribution does little to reassure me that such journals are not aggravating certain systemic problems in the scientific publication system. This will probably be the topic of a later post, if I ever get around to it. But let's get back to the modeling questions.

Seeking to be more physics and less statistics, the ultimate goal of such a study of human behavior should be to understand the mechanism at play, and at least Barabasi did put forward and analyze a plausible suggestion there, even if a) he may not have done enough data analysis to properly support it or his claims of universality, and b) his model assumes some rather unrealistic behavior on the part of humans. Indeed, the former is my chief complaint about his paper, and why I am grateful for the Stouffer et al. comment and the ensuing discussion. With regard to the latter, my preference would have been for Barabasi to discuss the fragility of his model with respect to the particular assumptions he makes. That is, humans probably don't assign priorities to their tasks with anything like a uniformly random distribution, nor do they always execute their highest-priority task next. For instance, can you decide, right now without thinking, what the most important email in your inbox is at this moment? Instead, he commits the crime of hubris and neglects these details in favor of the suggestiveness of his model given the data. On the other hand, regardless of their implausibility, both of these assumptions about human behavior can be tested through experiments with real people and through numerical simulation. That is, these assumptions become predictions about the world that, if they fail to agree with experiment, would falsify the model. This seems to me an advantage of Barabasi's mechanism over the one proposed by Johansen, which, by relying on a spin-glass model of human behavior, seems much trickier to falsify.
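For readers who want to poke at those assumptions numerically, here is a minimal simulation sketch of a Barabasi-style priority queue, with both assumptions exposed as explicit, swappable pieces. The parameter choices and variable names are mine; treat it as a toy for experiments, not a faithful reproduction of the paper's model.

import random

def priority_queue_waits(steps=200000, L=2, p=0.999):
    """Toy Barabasi-style priority-queue model. Each of L pending tasks
    gets a uniformly random priority (assumption 1); at each step the
    highest-priority task is executed with probability p, otherwise a
    random one (assumption 2, softened by p). Returns the waiting times
    between each executed task's arrival and its execution."""
    tasks = [(random.random(), 0) for _ in range(L)]  # (priority, arrival time)
    waits = []
    for t in range(1, steps + 1):
        if random.random() < p:
            i = max(range(L), key=lambda j: tasks[j][0])  # do the top task
        else:
            i = random.randrange(L)                       # do a random task
        waits.append(t - tasks[i][1])       # how long did it sit in the queue?
        tasks[i] = (random.random(), t)     # a new task takes its place
    return waits

# The claim to check: for p near 1 the waiting times are heavy-tailed
# (roughly P(t) ~ 1/t), while for p near 0 they look exponential. Swapping
# random.random() for a non-uniform priority distribution tests assumption 1.
waits = priority_queue_waits()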

But let's get back to the topic of the data analysis and the argument between Stouffer et al. and Barabasi et al. (now also Johansen) over whether the data better support a log-normal or a power-law distribution. The importance of this point is that if the log-normal is the better fit, then the mathematical model Barabasi proposes cannot be the originating mechanism. From my experience with heavy-tailed distributions, it can be difficult to statistically (let alone visually) distinguish between a log-normal and various kinds of power laws. In human systems, there is almost never enough data (read: orders of magnitude) to distinguish these without standard (but sophisticated) statistical tools, because any finite sample from an asymptotic distribution will show deviations that blur the functional form just enough to make it look rather like the other. For instance, if you look closely at the data of Barabasi or Johansen, there are deviations from the power-law distribution in the far upper tail. Stouffer et al. cite these as examples of the poor fit of the power law and as evidence supporting the log-normal. Unfortunately, they could simply be finite-sample effects (not to be confused with finite-size effects), and the only way to tell is to resample the hypothesized distribution and measure the sampled deviations against the observed one.
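To illustrate how easy the confusion is, here is a toy sketch: draw a few thousand points from a shifted power law of Johansen's form and from a log-normal, and compare their tails. The sample size and all parameter values below are my own invention, chosen only to make the two tails roughly comparable.

import numpy as np

rng = np.random.default_rng(0)
n = 5000  # a generous sample by the standards of these email data sets

# Shifted power law P(t) ~ 1/(t+c), truncated to [1, 1e6] and drawn by
# inverse-transform sampling, versus a log-normal with a similar body.
c, lo, hi = 10.0, 1.0, 1e6
u = rng.random(n)
power = (lo + c) * ((hi + c) / (lo + c)) ** u - c
lognorm = rng.lognormal(mean=3.0, sigma=2.5, size=n)

# Upper-tail quantiles: plotted as complementary CDFs on log-log axes,
# samples like these are hard to tell apart by eye over several decades.
for name, x in [("shifted power law", power), ("log-normal", lognorm)]:
    print(f"{name:18s}", np.round(np.quantile(x, [0.5, 0.9, 0.99, 0.999]), 1))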

The approach that I tend to favor for resolving this kind of question combines a goodness-of-fit test with a statistical power test to distinguish between alternative models. It's a bit more labor-intensive than the Bayesian model selection employed by Stouffer et al., but it offers, among other advantages that I'll describe momentarily, the ability to conclude that, given the data, neither model is good, or that both are.

Using Monte Carlo simulation and something like the Kolmogorov-Smirnov goodness-of-fit test, you can quantitatively gauge how likely a random sample drawn from your hypothesized function F (which can be derived using maximum-likelihood parameter estimation or something like a least-squares fit; it doesn't matter) is to deviate from F at least as much as the observed data do. By then comparing these deviations with those of an alternative function G (e.g., a power law versus a log-normal), you get a measure of the power of F over G as an originating model of the data. For heavy-tailed distributions, particularly those with a sample mean that converges slowly or never at all (as is the case for something like P(t) ~ 1/t), sampling deviations can cause pretty significant problems for model selection, and I suspect that the Bayesian model-selection approach is sensitive to these. On the other hand, by incorporating sampling variation into the model-selection process itself, one can get an idea of whether it is even possible to select one model over another. If someone were to use this approach to analyze the data on human response times, I suspect that the pure power law would be a poor fit (the data look too curved for that), but that the power law suggested in Johansen's paper would be largely statistically indistinguishable from a log-normal. With this knowledge in hand, one is then free to posit mechanisms that generate either distribution and proceed to validate the theory by testing its predictions (e.g., its assumptions).
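Here is a minimal sketch of the goodness-of-fit half of that procedure, using a log-normal as the hypothesized F; the use of scipy, the decision to refit on every synthetic sample, and the bootstrap count are all my own choices. Testing a power-law G works the same way, with the fit and sampling steps swapped out, and comparing the two resulting p-values is the crude version of the power comparison described above.

import numpy as np
from scipy import stats

def lognormal_gof_pvalue(data, n_boot=1000, seed=0):
    """Monte Carlo Kolmogorov-Smirnov goodness-of-fit test for a
    log-normal hypothesis F. Fit F to the data and record the KS
    distance; then repeatedly draw synthetic samples of the same size
    from the fitted F, refit, and count how often the synthetic KS
    distance is at least as large as the observed one. A small p-value
    means F is a poor description of the data."""
    rng = np.random.default_rng(seed)
    n = len(data)

    def fit_and_ks(x):
        shape, loc, scale = stats.lognorm.fit(x, floc=0)  # MLE, fixed loc
        d, _ = stats.kstest(x, "lognorm", args=(shape, loc, scale))
        return d, (shape, loc, scale)

    d_obs, params = fit_and_ks(data)
    exceed = sum(
        fit_and_ks(stats.lognorm.rvs(*params, size=n, random_state=rng))[0] >= d_obs
        for _ in range(n_boot)
    )
    return exceed / n_boot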

So, in the end, we may not have gained much in arguing about which heavy-tailed distribution the data likely came from, and instead should consider whether or not an equally plausible mechanism for generating the response-time data could be derived from the standard mechanisms for producing log-normal distributions. If we had such an alternative mechanism, then we could devise some experiments to distinguish between them and perhaps actually settle this question like scientists.

As a closing thought, my interest in this debate is not particularly in its politics. Rather, I think this story raises some excellent questions about the practice of modeling, the questions a good modeler should ponder on the road to truth, and some of the potholes strewn about the field of complex systems. It also, unfortunately, provides some anecdotal evidence of systemic problems with attribution, the scientific publishing industry and the current state of peer review at high-profile, fast-turnaround journals.

References for those interested in reading the source material.

A. Johansen, "Probing human response times." Physica A 338 (2004) 286-291.

A.-L. Barabasi, "The origin of bursts and heavy tails in human dynamics." Nature 435 (2005) 207-211.

D. B. Stouffer, R. D. Malmgren and L. A. N. Amaral, "Comment on 'The origin of bursts and heavy tails in human dynamics'." e-print (2005).

J.-P. Eckmann, E. Moses and D. Sergi, "Entropy of dialogues creates coherent structures in e-mail traffic." PNAS USA 101 (2004) 14333-14337.

A.-L. Barabasi, K.-I. Goh and A. Vazquez, "Reply to Comment on 'The origin of bursts and heavy tails in human dynamics'." e-print (2005).

posted December 19, 2005 04:32 PM in Scientifically Speaking | permalink | Comments (0)

March 20, 2005

Review: Hackers and Painters

I recently finished reading the book "Hackers and Painters" by Paul Graham. Graham is notable for having cashed out of the dot-com era when he sold his company ViaWeb to Yahoo!. ViaWeb built online-store software that apparently now runs the nearly ubiquitous Yahoo! Store, used by some hundreds of thousands of small businesses. Hackers and Painters is basically a collection of loosely related essays that Graham has written over the past several years, many of which appear in their entirety on his website. An interesting biographical fact about Graham is that he holds graduate degrees in both computer science and fine art. This dual perspective is the basis for the title of his book and for a couple of the essays, in which he tries to draw similarities between the skills and dedication required to be a good hacker and those required to be a good painter. Ultimately, it's a symmetry that I am still not convinced of, despite lots of nice pictures of Renaissance art.

Generally, I would put Hackers and Painters in a category similar to Malcolm Gladwell's "The Tipping Point", or his more recent book "Blink" (which I am refusing to read on account of its premise being complete bullshit). Basically, these books are written so as to paint an overly simplistic view of the way complex systems in the world work. Graham is not at his best when he's spouting capitalist propaganda about how to get rich, but rather when he writes from his own formative experiences, such as his explanation of why nerds are unpopular, or how ViaWeb became more nimble than its competition. Just like Gladwell's, Graham's writing style is seductively reductionistic, brimming with short, apparently explanatory anecdotes and shallow, conventional-wisdom-style generalizations that actually conceal a host of unspoken assumptions about both the world and the reader. For that, I disliked Graham's conclusions quite a bit, although there are times when he does make interesting and potentially profound observations, such as one about the importance of being aware of how language itself can subtly guide cultural evolution.

Here's a short list of silly generalizations one can glean from the text. There are many more reasonable and probably accurate generalizations in the book, along with a wealth of information about how Graham built ViaWeb into a strong business. I omit those nuggets of wisdom because this is an unfavorable review (if you're curious, peruse Graham's list of online essays).

- People who are not rich are that way because they have chosen not to work as hard as those who are.

- Economic inequality is a sign that some people are working a lot harder than everyone else (which is a good thing).

- Being successful in the software business has everything to do with choosing a superior programming language (e.g., LISP).

- Hackers should be allowed to run the world, because they're smarter, work harder and have great ideas.

- Academics should not design programming languages. Unless, that is, they design good ones (e.g., LISP).

- What makes a language popular is its power. Except, of course, when it comes to LISP, which is unpopular because of politics and pointy-haired middle managers.

- Java is a bad language, and is only popular because of politics and pointy-haired middle managers.

Generally, I wasn't impressed by the book, although there were some very enjoyable sections. Had it been published 4-5 years earlier, when the dot-com culture was still freshly interesting, it would have been more compelling, since many of his essays revolve around the lessons he learned from running a successful start-up company. However, when I finished the book, I was still left wondering about the subtitle - what exactly were the Big Ideas from the Computer Age? Perhaps something about the value of smart, independent people working hard in small groups, and the romantic notion that these groups can and will change the world. In this case, he's concerned with these groups changing the world with computers, but honestly, hasn't it always been such groups that change the world, regardless of what tools they use?

"Hackers and Painters: Big Ideas from the Computer Age", by Paul Graham.

ISBN: 0596006624, $15.61 at Amazon.com

Hardcover: 271 pages, published by O'Reilly (May 2004)

posted March 20, 2005 05:07 PM in Reviews | permalink | Comments (0)