February 21, 2005
Global patterns in terrorism; follow-up
Looks like my article with Maxwell Young is picking up some more steam. Philip Ball, a science writer who often writes for the Nature Publishing Group, has authored a very nice little piece that draws heavily on our results. You can read the piece itself here. It's listed under the "muse@nature" section, and I'm not quite sure what that means, but the article is nicely thought-provoking. Here's an excerpt from it:
And the power-law relationship implies that the biggest terrorist attacks are not 'outliers': one-off events somehow different from the all-too-familiar suicide bombings that kill or maim just a few people. Instead, it suggests that they are somehow driven by the same underlying mechanism.
Similar power-law relationships between size and frequency apply to other phenomena, such as earthquakes and fluctuations in economic markets. This indicates that even the biggest, most infrequent earthquakes are created by the same processes that produce hordes of tiny ones, and that occasional market crashes are generated by the same internal dynamics of the marketplace that produce daily wobbles in stock prices. Analogously, Clauset and Young's study implies some kind of 'global dynamics' of terrorism.
Moreover, a power-law suggests something about that mechanism. If every terrorist attack were instigated independently of every other, their size-frequency relationship should obey the 'gaussian' statistics seen in coin-tossing experiments. In gaussian statistics, very big fluctuations are extremely rare - you will hardly ever observe ten heads and ninety tails when you toss a coin 100 times. Processes governed by power-law statistics, in contrast, seem to be interdependent. This makes them far more prone to big events, which is why giant tsunamis and market crashes do happen within a typical lifetime. Does this mean that terrorist attacks are interdependent in the same way?
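The contrast Ball draws is easy to see in simulation. The sketch below (the sample sizes and random seed are arbitrary choices for illustration, not anything from our paper) compares the largest event to the average event under coin-toss statistics and under a power law with exponent 2:

```python
import random

random.seed(42)

# Gaussian-like process: each "event" is the number of heads in 100 coin
# tosses. Large deviations (say, 90 heads) are astronomically rare.
coin_events = [sum(random.random() < 0.5 for _ in range(100))
               for _ in range(10_000)]

# Power-law process with exponent alpha = 2: event sizes drawn from
# p(x) ~ x^(-2) for x >= 1, via inverse transform sampling (x = 1/(1-u)).
power_events = [1.0 / (1.0 - random.random()) for _ in range(10_000)]

# Compare the largest event to the average event in each process.
coin_ratio = max(coin_events) / (sum(coin_events) / len(coin_events))
power_ratio = max(power_events) / (sum(power_events) / len(power_events))
print(coin_ratio)   # close to 1: the biggest event is an ordinary one
print(power_ratio)  # far larger: the biggest event dwarfs the typical one
```

Under the coin-toss statistics the biggest event is barely bigger than the average one; under the power law it is bigger by orders of magnitude, which is exactly why the largest attacks are not "outliers."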
Here's a bunch of other places that have picked it up or are discussing it; the sites with original coverage about our research are listed first, while the rest are (mostly) just mirroring other sites:
(February 10) Physics Web (news site [original])
(February 18) Nature News (news site [original])
(March 2) World Science (news site [original])
(March 5) Watching America (news? (French) (Google cache) [original])
(March 5) Brookings Institution (think tank [original])
(March 19) Die Welt (one of the big three German daily newspapers (translation) [original] (in German))
3 Quarks Daily (blog, this is where my friend Cosma Shalizi originally found the Nature pointer)
Science Forum (a lovely discussion)
The Anomalist (blog, Feb 18 news)
Science at ORF.at (news blog (in German))
Wissenschaft-online (news (in German))
Economics Roundtable (blog)
Physics Forum (discussion)
Spektrum Direkt (news (in German))
Manila Times (Philippines news)
Rantburg (blog, good discussion in the comments)
Unknown (blog (in Korean))
Neutrino Unbound (reading list)
Discarded Lies (blog)
Global Guerrillas (nice blog by John Robb, who references our work)
NewsTrove (news blog/archive)
The Green Man (blog, with some commentary)
Sapere (newspaper (in Italian))
Almanacco della Scienza (news? (in Italian))
Logical Meme (conservative blog)
Money Science (discussion forum (empty))
Always On (blog with discussion)
Citebase ePrint (citation information)
Crumb Trail (blog, thoughtful coverage)
LookSmart's Furl (news mirror?)
A Dog That Can Read Physics Papers (blog (in Japanese))
Tyranny Response Unit News (blog)
Chiasm Blog (blog)
Focosi Politics (blog?)
vsevcosmos LJ (blog)
larnin carves the blogck (blog (German?))
Brothers Judd (blog)
Gerbert (news? (French))
Dr. Frantisek Slanina (homepage, Czech)
Ryszard Benedykt (some sort of report/paper (Polish))
Mohammad Khorrami (translation of Physics Web story (Persian))
Feedz.com (new blog)
The Daily Grail (blog)
Physics Forum (message board)
Tempers Ball (message board)
A lot of these places reference each other. It seems like most of them are getting their information from our arXiv posting, the PhysicsWeb story, or now the Nature story. I'll keep updating this list as more places pick it up.
Update: It's a little depressing to me that the conservatives seem to be latching onto the doom-and-gloom elements of our paper as being justification for the ill-named War on Terrorism.
Update: A short while ago, we were contacted by a reporter for Die Welt, one of the three big German daily newspapers, who wanted to do a story on our research. If the story ever appears online, I'll post a link to it.
February 20, 2005
On the currency of ideas
Scarce resources. It's one of those things that you know is really important for all sorts of other stuff, but most of the time feels like a distant problem for you, or maybe your kids, to deal with. Sure, everyone agrees that material stuff can be scarce. I mean, there's never enough time in the day, or parking spaces, or money. Those are scarcities that most people worry about, right? But who ever worries about a shortage of ideas?
As is often the case when I'm driving somewhere, I found myself musing tonight about the recent brouhaha over software patents in Europe, in which certain industries are trying very hard to make ideas as ownable as shoes. As preposterous as this idea sounds (after all, if I lend you my pair of shoes, then you have them and I don't; whereas if I tell you my latest brilliant idea, we both have it), that's what several very wealthy industries believe is the key to their continued profitability. Why do they believe this? For two reasons, basically. On the one hand, they believe that being able to own an idea will protect their investment in its development by letting them sue the pants off anyone who tries to do something similar. On the other hand, if ideas are like shoes, then they can be bought and sold (read: for profit) just like any other commodity. Pharmaceutical companies patent chemical structures, online companies patent user interfaces, and everyone wants a piece of the intellectual property pie before the last piece gets eaten. If these people succeed at redefining what it means to own something, won't the future be full of people being arrested for saying the wrong things, or even thinking the wrong thoughts? Shades of Monsieur Orwell linger darkly these days.
This is all rather abstract, and the drama over software patents will probably play out without a care about little folks like me. But a cousin of this demon is lurking much closer to home. When there are more people than good ideas floating around, ideas become a scarce resource just like anything else. I've had the same conversation a half-dozen times with different people recently about how fast-paced my field is. Why is it this way? Well, sure, there's a lot really great stuff being done by a lot of great people. But then, the same is true in fields like quantum gravity and econophysics. I suspect that part of what really makes my field move is that people are scared of being scooped. And although it hasn't happened to me yet, I fear it just as much as everyone else. And so, everyone in the race spends a few more hours hunched over the computer, a few more days feverishly crafting an old idea into a finished project, and spends fewer moments admiring the sky, and fewer thoughts on the people they love.
In academia, ideas are already property. An idea is owned once someone publishes it. But the competition doesn't stop there. Then comes the endless self-promotion of your work in an effort to convince other people that your idea is a good one. In the end, an idea is yours only when other people will argue that it's not theirs. The entire system is founded on the premise that, if your idea is truly great, then in the end everyone will acknowledge that it's fabulous and that you're that much cooler for having come up with it. While not exactly a system that encourages healthy lifestyles, especially for women, it has enough merits to outweigh its failings, and it's a lot better than any alternative resembling software patents. The danger comes from the combination of a lag time between coming up with an idea and publishing it, and there being more researchers than ideas. When both are true, you get the fever-pitched mental race to see who gets to make the first splash, and who gets water up the nose. I've never liked the sting of a nasal enema of chlorine or salt, and I have at least two projects where this is a fairly serious concern.
When a group decides that an idea is owned by a person, it's an inherently social exchange, and it can never become a financial one without micro-policing that would put any totalitarian to shame. Hypothetically, if ideas were locked up by law, would I have to pay you when you tell me your idea? Not with money, not yet; but even now, I do pay you. Instead of financial capital, I pay you with social capital: with respect, recognition and reference. When I, in turn, tell my friend about your idea, I tell them that it's yours. If it's a good idea, then we both associate its goodness with you. This is the heart of the exchange: you give us your idea, and we give you back another idea. Normally, this is the best we can do, but modern technology has given us a new way to pay each other for ideas: hyperlinks. When I link to other articles and pages in my posts, I am tithing their owners and authors. After all, hyperlinks are (roughly) how Google decides who owns what. That is, topology becomes a proxy for wealth.
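As a toy illustration of topology-as-wealth, here is a minimal sketch of the power-iteration idea behind PageRank. The four-page web and the damping factor of 0.85 below are illustrative assumptions of mine, not Google's actual graph or parameters:

```python
# Toy PageRank: "reputation" flows along hyperlinks, so the pages that
# collect the most inbound links collect the most wealth.
links = {
    "alice": ["bob", "carol"],   # alice links to (i.e., "pays") bob and carol
    "bob":   ["carol"],
    "carol": ["alice"],
    "dave":  ["carol"],          # dave links to carol, but nobody links to dave
}
pages = list(links)
d = 0.85  # damping factor: the chance a surfer follows a link vs. jumping anywhere
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # power iteration until (approximately) converged
    new = {p: (1 - d) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += d * rank[p] / len(outs)
    rank = new

# carol, with the most inbound links, ends up with the most "wealth",
# and dave, whom nobody links to, ends up with the least.
print(sorted(rank, key=rank.get, reverse=True))
```

The point of the sketch is just the exchange: every outbound link splits a page's reputation among the pages it cites, which is the hyperlink version of telling a friend whose idea it was.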
In the social world, reputation and gossip are the currency of exchange. But in the digital world, hyperlinks are the currency of information and ideas. So, who wants some free money?
p.s. Blogs are an intersection of the social world and the digital world. If bloggers and other sites exchange currency in the form of hyperlinks, what do readers and bloggers exchange? Comments. Comments are the currency that readers use to pay their authors for their posts.
February 15, 2005
End of the Enlightenment; follow-up
I keep stumbling across great pieces on rational thought and scientific inquiry, and also, unfortunately, stuff about Intelligent Design'ers. A quick round-up of several good ones:
Here is a fantastic piece, written by Anthropology professor James Lett, about how to test the evidence for a claim, featured on CSICOP, which aptly stands for the Committee for the Scientific Investigation of Claims of the Paranormal. A brief excerpt:
The rule of falsifiability is essential for this reason: If nothing conceivable could ever disprove the claim, then the evidence that does exist would not matter; it would be pointless to even examine the evidence, because the conclusion is already known -- the claim is invulnerable to any possible evidence. This would not mean, however, that the claim is true; instead it would mean that the claim is meaningless.
Data and theory. Evidence and mechanism. These are the twin pillars of sound science. Without data and evidence, there is nothing for a theory or mechanism to explain. Without a theory and mechanism, data and evidence drift aimlessly on a boundless sea.
The Global Consciousness Project (amazingly, run out of Princeton) is the massive exercise in rabbit chasing that I mentioned in the previous post. A sound critique of its claims is made by Claus Larsen, who went to a talk by Dean Radin of the GCP.
Another serious problem with the September 11 result was that during the days before the attacks, there were several instances of the eggs picking up data that showed the same fluctuation as on September 11th. When I asked Radin what had happened on those days, the answer was:
"I don't know."
I then asked him - and I'll admit that I was a bit flabbergasted - why on earth he hadn't gone back to see if similar "global events" had happened there since he got the same fluctuations. He answered that it would be "shoe-horning" - fitting the data to the result.
Checking your hypothesis against seemingly contradictory data is "shoe-horning"?
For once, I was speechless.
Finally, in subsequent conversations with Leigh, I made an observation that's worth repeating. The Church was having it out with natural philosophers (i.e., proto-natural scientists) about whether the earth was flat, or whether the sun went 'round the earth, as long ago as c.1500 if you count from Copernicus (c.1600 if you count from Galileo). A rough estimate of the length of that battle is 300 years before the general public agreed with the brave gentlemen who stood against ignorance. Charles Darwin kicked off the battle over whether biological complexity requires design, that is, the "debate" over evolution, a little over 150 years ago. So, if history repeats itself (which it inevitably does), then we have a long way to go before this fight is over. As a side note, Darwin's birthday was February 12th.
February 12, 2005
End of the Enlightenment
The Enlightenment was a grand party of rationalism, lasting a brief yet highly productive 300 years - a mere blip in the multi-millennial history of humans. Ah, but the wine was good and the fireworks spectacular. Now the candle is going out, and we are returning to the comfort of darkness, where life isn't so complicated and the unknown seems more understandable.
"I worry that, especially as the Millennium edges nearer, pseudoscience and superstition will seem year by year more tempting, the siren song of unreason more sonorous and attractive. Where have we heard it before? Whenever our ethnic or national prejudices are aroused, in times of scarcity, during challenges to national self-esteem or nerve, when we agonize about our diminished cosmic place and purpose, or when fanaticism is bubbling up around us-then, habits of thought familiar from ages past reach for the controls." -- Carl Sagan, The Demon-Haunted World: Science As a Candle in the Dark
Primary among the tenets of the Enlightenment was the belief that the world is fundamentally rational, a belief that stood in stark contrast to the dogma that divine action is necessary and that the supernatural exists. With rationality, however, God was no longer needed to guide an apple from the tree to the ground. With rationality, something odd happened: science became predictive, whereas before it had only been descriptive. Religion (in its many forms) remains the latter.
"I maintain there is much more wonder in science than in pseudoscience. And in addition, to whatever measure this term has any meaning, science has the additional virtue, and it is not an inconsiderable one, of being true." -- Carl Sagan
Before the Enlightenment, people turned to those who had the ear of God for information about the future. But with the emergence of rational thought and its heir, scientific inquiry, prediction became the province of Man. Although George W Bush may claim that it is freedom, I claim that it is instead science that is the most fundamental democratizing force in the world. Science, not freedom, gives both the aristocrat and the peasant access to Truth.
"Many statements about God are confidently made by theologians on grounds that today at least sound specious. Thomas Aquinas claimed to prove that God cannot make another God, or commit suicide, or make a man without a soul, or even make a triangle whose interior angles do not equal 180 degrees. But Bolyai and Lobachevsky were able to accomplish this last feat (on a curved surface) in the nineteenth century, and they were not even approximately gods." -- Carl Sagan, Broca's Brain
However sensationalist it may be to claim that we are in the twilight of the Enlightenment, especially considering the wonders of modern science, there is disturbing evidence that a cultural backlash against rationality is under way. Consider the Bush administration's abuse of science for political ends, including the recent revelation that scientists at the U.S. Fish and Wildlife Service have been instructed to alter their scientific findings for political and pro-business reasons. Scientists apparently self-censor for fear of political repercussions. And with the faux debate over intelligent design (simply the new version of creationism, and equally inane) apparently persuading many teachers to avoid teaching evolution at all, it seems clear that something significant is happening. (For many excellent critiques of intelligent design, and a fantastic discussion of how evolution is supported by a burgeoning amount of evidence, see Carl Zimmer's eminently readable blog.)
"For most of human history we have searched for our place in the cosmos. Who are we? What are we? We find that we inhabit an insignificant planet of a hum-drum star lost in a galaxy tucked away in some forgotten corner of a universe in which there are far more galaxies than people. We make our world significant by the courage of our questions, and by the depth of our answers." -- Carl Sagan
During the 300 years of the Enlightenment, science steadily pushed back the darkness and revealed that the material world, so far as we know it, is completely mechanistic. Although the details may seem arcane or magical to most, the power of this world-view is affirmed by the general public's acceptance of the fruits of science, i.e., technology, as basic, even essential, components of their lives.
"If you want to save your child from polio, you can pray or you can inoculate... Try science." -- Carl Sagan, The Demon-Haunted World
Why, then, is there such a violent reaction from so many people to topics like evolution or the possibility of life beyond Earth? If rational thought has been so much more successful than any of the alternatives, why do people persist in believing in psychic powers and the idea that the Earth was created in 168 hours? Why will people accept that electrons follow the laws of physics as they flow through the transistors that are critical to displaying this text, yet refuse to accept the mountain of evidence supporting evolution?
"If we long to believe that the stars rise and set for us, that we are the reason there is a Universe, does science do us a disservice in deflating our conceits?... For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring." -- Carl Sagan, The Demon-Haunted World
It seems reasonable to me that the answer to these questions is that human beings are fundamentally irrational and that rational thought is not a natural mode of thought for us. Science has never been popular because its study is frustrating, slow and confusing. It often involves math or memorization, and it always involves discipline and persistence. These things do not come easily to most people. It is easier to be utilitarian and dogmatic than it is to be skeptical and careful. Combined with our godly powers of rationalization (a topic about which I will blog soon) and a fundamental laziness of mind (which can only be circumvented by careful training and perpetual vigilance), it seems somewhat surprising that the Enlightenment ever happened in the first place.
"Think of how many religions attempt to validate themselves with prophecy. Think of how many people rely on these prophecies, however vague, however unfulfilled, to support or prop up their beliefs. Yet has there ever been a religion with the prophetic accuracy and reliability of science?" -- Carl Sagan, The Demon-Haunted World
But since it did happen, the least we can do is enjoy the candle that burns so brightly now, even as the darkness advances menacingly. We can only hope (an irrational and truly human feeling) that the Enlightenment is more resilient than the darkness is persistent.
"Our species needs, and deserves, a citizenry with minds wide awake and a basic understanding of how the world works." -- Carl Sagan, The Demon-Haunted World
Update: The Global Consciousness Project is a prime example of supposedly rational people being very irrational. In it, normally respectable scientists monitor the fluctuations of random number generators in an effort to measure global psychic events. This recent story about it sounds persuasive, but the scientists involved are making a fundamental mistake of agency. Sure, there are some unexplained correlations between the random number generators, but there are also significant correlations between the first letter of your name and your life span. The true question is whether there is a causative relationship between the first letter and your life span. Similarly, these scientists' notion of a causative mechanism linking the fluctuations in the random number generators to apparent "global" events is simply a self-fulfilling hypothesis - you can always explain away the failures and highlight the successes. It's called investigator's bias, and it's a well-documented state of irrationality that seems quite rational to the beholder.
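The trap is easy to demonstrate: search enough streams of pure noise for patterns, and "significant" correlations appear by chance alone. A small sketch (the number of streams, the sample length and the seed are arbitrary choices of mine):

```python
import math
import random

random.seed(1)

def corr(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# 40 independent streams of pure noise, 50 samples each: no stream
# "causes" or influences any other.
streams = [[random.gauss(0, 1) for _ in range(50)] for _ in range(40)]

# Search all 780 pairs for the strongest correlation, as a
# hypothesis-hunting investigator might.
best = max(abs(corr(a, b))
           for i, a in enumerate(streams)
           for b in streams[i + 1:])

# For comparison, the critical |r| for p < 0.05 at n = 50 is about 0.28.
print(best)
```

With hundreds of pairs to choose from, the strongest correlation will comfortably clear the usual significance threshold even though every stream is noise. Only checking the hypothesis against new data, exactly what Radin refused to do, separates a real effect from a lucky search.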
p.s. Thank you to my friend Leigh Fanning for provoking the thoughts that led to this entry over dinner last night at Vivace's.
February 09, 2005
Global patterns in terrorism
Although the severity of terrorist attacks may seem to be either random or highly planned in nature, it turns out that in the long run it is neither. By studying the set of all terrorist attacks worldwide between 1968 and 2004, we show that a simple mathematical rule, a power law with an exponent close to two, governs the frequency and severity of attacks. Thus, if history is any basis for predicting the future, we can predict with some confidence how long it will be before the next catastrophic attack occurs somewhere in the world.
In joint work with Max Young, we've discovered the appearance of a surprising global pattern in terrorism over the past 37 years. The brief write-up of our findings is on arXiv.org and can be found here.
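For the curious, here is a minimal sketch of how a power-law exponent can be estimated from a list of event severities by maximum likelihood. The data below are synthetic, drawn with a known exponent of 2; they are a stand-in, not the terrorism data from our paper, and the estimator shown is one standard choice for a continuous power law above a known xmin:

```python
import math
import random

random.seed(7)

# Synthetic "event severities" drawn from p(x) ~ x^(-alpha) with alpha = 2
# and x >= xmin = 1, via inverse transform sampling:
#   x = xmin * (1 - u)^(-1/(alpha - 1)),  u uniform on [0, 1)
xmin = 1.0
true_alpha = 2.0
sizes = [xmin * (1.0 - random.random()) ** (-1.0 / (true_alpha - 1.0))
         for _ in range(50_000)]

# Maximum-likelihood estimate of the exponent for a continuous power law:
#   alpha_hat = 1 + n / sum(ln(x_i / xmin))
n = len(sizes)
alpha_hat = 1.0 + n / sum(math.log(x / xmin) for x in sizes)
print(round(alpha_hat, 2))  # close to 2, the exponent we drew from
```

With enough events, the estimate recovers the exponent that generated the data; the harder part in practice is deciding where the power-law region begins.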
Update: PhysicsWeb has done a brief story covering this work as well. The story is fairly reasonable, although the writer omitted a statement I made about caution with respect to this kind of work. So, here it is:
Generally, one should be cautious when applying the tools of one field (e.g., physics) to make statements in another (e.g., political science) as in this case. The results here turned out to be quite nice, but in approaching similar questions in the future, we will continue to exercise that caution.
February 06, 2005
Culture and the politics of marriage
Marriage is supposed to be a celebration of life and commitment. It is supposed to be a time when two people decide to share their lives in the most intimate of ways - becoming a single taxable entity. Although there are certainly a great many incentives for marriage, e.g., studies often show it's a long-term investment in one's health, there are other advantages to remaining single, like greater independence and flexibility in picking out furniture.
Yet despite the fundamental importance of the family unit to the continuation of the human race, the trend is for Western young adults to stay "single" longer and marry later, particularly when they are well-educated or perhaps part of the "creative class". I suspect another positive correlation with a lengthier single-hood: living in a dense urban area, such as New York City or San Francisco, provides such a vast array of opportunities (both social and career-wise) that we young folk are reluctant to close off any of those possibilities. Even with popular shows like Sex and the City moralizing about the benefits of marriage, the primary audience of the show doesn't seem to be imitating it in that particular respect.
This fact is one more reason to suggest that the conservative movement to promote the misnamed "family values" (which is largely a religious cover for homophobia and bigotry) is fundamentally out of step with the culture as a whole. In fact, it betrays the movement's fundamental hypocrisy with regard to modern culture. This is best illustrated by, on the one hand, the monotonous promotion of "marriage" as a cure for all cultural ills, and on the other hand, the denial of that very institution (and all of the benefits that go along with it) to gays. Taking a slightly broader perspective, the movement's condemnation of social liberalism (e.g., tolerance of homosexuality, etc.), contrasted with its lustful relationship with corporate liberalism, is inherently two-faced.
Is it possible for politics to not sully itself with the commingling of self-serving power and craven materialism? Apparently not.
February 03, 2005
Our ignorance of intelligence
A recent article in the New York Times, which is itself a review of a review article that recently appeared in Nature Reviews Neuroscience by the oddly named Avian Brain Nomenclature Consortium, about the incredible intelligence of certain bird species has prompted me to dump some thoughts about the abstract quality of intelligence, and more importantly, where it comes from. Having also recently finished reading On Intelligence by Jeff Hawkins (yes, that one), I've returned to my once and future fascination with that ephemeral and elusive quality that is "intelligence". We'll return to that shortly, but first let's hear some amazing things, from the NYTimes article, about what smart birds can do.
"Magpies, at an earlier age than any other creature tested, develop an understanding of the fact that when an object disappears behind a curtain, it has not vanished.
At a university campus in Japan, carrion crows line up patiently at the curb waiting for a traffic light to turn red. When cars stop, they hop into the crosswalk, place walnuts from nearby trees onto the road and hop back to the curb. After the light changes and cars run over the nuts, the crows wait until it is safe and hop back out for the food.
Pigeons can memorize up to 725 different visual patterns, and are capable of what looks like deception. Pigeons will pretend to have found a food source, lead other birds to it and then sneak back to the true source.
Parrots, some researchers report, can converse with humans, invent syntax and teach other parrots what they know. Researchers have claimed that Alex, an African gray, can grasp important aspects of number, color concepts, the difference between presence and absence, and physical properties of objects like their shapes and materials. He can sound out letters the same way a child does."
Amazing. What is even more surprising is that the structure of the avian brain is not like the mammalian brain at all. In mammals (and especially in humans), the so-called lower regions of the brain have been enveloped by a thin sheet of cortical cells called the neocortex. This sheet is the base of human intelligence and is incredibly plastic. Further, it has assumed most of the control for many basic functions like breathing and hunger. The neocortex's pre-eminence is what allows people to consciously starve themselves to death. Arguably, it's the seat of free will (which I will blog about at a later date).
So how is it that birds, without a neocortex, can be so intelligent? Apparently, they have evolved a set of neurological clusters that are functionally equivalent to the mammalian neocortex, and this allows them to learn and predict complex phenomena. The equivalence is an important point in support of the belief that intelligence is independent of the substrate on which it is based; here, we mean specifically the types of supporting structures, but this independence is a founding principle of the dream of artificial intelligence (which is itself a bit of a misnomer). If there is more than one way that brains can create intelligent behavior, it is reasonable to wonder if there is more than one kind of substance from which to build those intelligent structures, e.g., transistors and other silicon parts.
It is this idea of independence that lies at the heart of Hawkins' "On Intelligence", in which he discusses his dream of eventually understanding the algorithm that runs on top of the neurological structures in the neocortex. Once we understand that algorithm, he dreams that humans will coexist with and cultivate a new species of intelligent machines that never get cranky, never have to sleep and can take care of mundanities like driving humans around and crunching through data. Certainly a seductive and utopian future, quite unlike the uninteresting, technophobic, dystopian futures that Hollywood dreams up (at some point, I'll blog about popular culture's obsession with technophobia and its connection to the ancient fear of the unknown).
But can we reasonably expect that the engine of science, which has certainly made some astonishing advances in recent years, will eventually unravel the secret of intelligence? Occasionally, my less scientifically-minded friends have asked me to make my prediction on this topic (see previous reference to the fear of the unknown). My response is, and will continue to be, that "intelligence" is, first of all, a completely ill-defined term: whenever we make machines do something surprisingly clever, critics simply change the definition of intelligence. But that slipperiness aside, I do not think we will realize Hawkins' dream of intelligent machines within my lifetime, and perhaps not within my children's either. What the human brain does is phenomenally complicated, and we are just now beginning to understand its most basic functions, let alone how they interact or how they adapt over time. Combined with the complicated relationship between genetics and brain structure (another interesting question: how does the genome store the algorithms that allow the brain to learn?), it seems like the quest to understand human intelligence will keep many scientists employed for many, many years. That all being said, I would love to be proved wrong.
Computer: tea; Earl Grey; hot.
Update 3 October 2012: In the news today is a new study in PNAS on precisely this topic, by Dugas-Ford, Rowell, and Ragsdale, "Cell-type homologies and the origins of the neocortex." The authors use a clever molecular marker approach to show that the cells that become the neocortex in mammals form different, but identifiable, structures in birds and lizards, with all three neural structures performing similar neurological functions. That is, they found convergent evolution in the functional behavior of different neurological architectures in these three groups of species. What seems so exciting about this discovery is that having multiple solutions to the same basic problem should help us identify the underlying symmetries that form the basis for intelligent behavior.