September 09, 2011

What is the probability of a 9/11-size terrorist attack?

Sunday is the 10-year anniversary of the 9/11 terrorist attacks. As a commemoration of the day, I'm going to investigate answers to a very simple question: what is the probability of a 9/11-size or larger terrorist attack?

There are many ways we could try to answer this question. Most of them don't involve using data, math and computers (my favorite tools), so we will ignore those. Even among quantitative tools, approaches differ in the strength of the assumptions they make about the social and political processes that generate terrorist attacks. We'll come back to this point throughout the analysis.

Before doing anything new, it's worth repeating something old. For the better part of the past 8 years that I've been studying the large-scale patterns and dynamics of global terrorism (see for instance, here and here), I've emphasized the importance of taking an objective approach to the topic. Terrorist attacks may seem inherently random or capricious or even strategic, but the empirical evidence demonstrates that there are patterns and that these patterns can be understood scientifically. Earthquakes and seismology serve as an illustrative example. Earthquakes are extremely difficult to predict, that is, to say beforehand when, where and how big they will be. And yet, plate tectonics and geophysics tell us a great deal about where and why they happen, and the famous Gutenberg-Richter law tells us roughly how often quakes of different sizes occur. That is, we're quite good at estimating the long-time frequencies of earthquakes because working at larger scales allows us to leverage a lot of empirical and geological data. The cost is that we lose the ability to make specific statements about individual earthquakes, but the advantage is insight into the fundamental patterns and processes.

The same can be done for terrorism. There's now a rich and extensive modern record of terrorist attacks worldwide [1], and there's no reason we can't mine this data for interesting observations about global patterns in the frequencies and severities of terrorist attacks. This is where I started back in 2003 and 2004, when Maxwell Young and I started digging around in global terrorism data. Catastrophic events like 9/11, which (officially) killed 2749 people in New York City, might seem so utterly unique that they must be one-off events. In their particulars, this is almost surely true. But, when we look at how often events of different sizes (number of fatalities) occur in the historical record of 13,407 deadly events worldwide [2], we see something remarkable: their relative frequencies follow a very simple pattern.

The figure shows the fraction of events that killed at least x individuals, where I've divided them into "severe" attacks (10 or more fatalities) and "normal" attacks (less than 10 fatalities). The lion's share (92.4%) of these events are of the "normal" type, killing fewer than 10 individuals, but 7.6% are "severe", killing 10 or more. Long-time readers have likely heard this story before and know where it's going. The solid line on the figure shows the best-fitting power-law distribution for these data [3]. What's remarkable is that 9/11 is very close to the curve, suggesting that, statistically speaking, it is not an outlier at all.
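For readers who want to reproduce this kind of plot themselves, here is a minimal sketch of computing an empirical complementary CDF (the "fraction of events that killed at least x" curve). The event sizes below are synthetic heavy-tailed stand-ins, since the MIPT data themselves are not reproduced here.

```python
import numpy as np

# Synthetic stand-in for the MIPT fatality counts (heavy-tailed integers).
rng = np.random.default_rng(0)
sizes = np.ceil(rng.pareto(1.4, 5000) + 1).astype(int)

def empirical_ccdf(x):
    """For each unique severity v, the fraction of events with severity >= v."""
    vals = np.unique(x)
    frac = np.array([np.mean(x >= v) for v in vals])
    return vals, frac

severities, frac_at_least = empirical_ccdf(sizes)
frac_severe = np.mean(sizes >= 10)  # share of "severe" events (10+ deaths)
print(f"fraction of severe events: {frac_severe:.3f}")
```

Plotting `frac_at_least` against `severities` on log-log axes gives the kind of curve shown in the figure; the real data yield the 92.4% / 7.6% split quoted above.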

A first estimate: In 2009, the Department of Defense received the results of a commissioned report on "rare events", with a particular emphasis on large terrorist attacks. In section 3, the report walks us through a simple calculation of the probability of a 9/11-sized attack or larger, based on my power-law model. It concludes that there was a 23% chance of an event that killed 2749 or more between 1968 and 2006. [4] The most notable thing about this calculation is that its magnitude makes it clear that 9/11 should not be considered a statistical outlier on the basis of its severity.

How can we do better: Although probably in the right ballpark, the DoD estimate makes several strong assumptions. First, it assumes that the power-law model holds over the entire range of severities (that is, x > 0). Second, it assumes that the model I published in 2005 is perfectly accurate, meaning both the parameter estimates and the functional form. Third, it assumes that events are generated independently by a stationary process, meaning that neither the production rate of events over time nor the underlying social and political processes that determine the frequency or severity of events has changed. We can improve our estimates by improving on these assumptions.

A second estimate: The first assumption is the easiest to fix. Empirically, 7.6% of events are "severe", killing at least 10 people. But, the power-law model assumed by the DoD report predicts that only 4.2% of events are severe. This means that the DoD model is underestimating the probability of a 9/11-sized event, that is, the 23% estimate is too low. We can correct this difference by using a piecewise model: with probability 0.076 we generate a "severe" event whose size is given by a power-law that starts at x=10; otherwise we generate a "normal" event by choosing a severity from the empirical distribution for 0 < x < 10. [5] Walking through the same calculations as before, this yields an improved estimate of a 32.6% chance of a 9/11-sized or larger event between 1968 and 2008.

A third estimate: The second assumption is also not hard to improve on. Because our power-law model is estimated from finite empirical data, we cannot know the alpha parameter perfectly. Our uncertainty in alpha should propagate through to our estimate of the probability of catastrophic events. A simple way to capture this uncertainty is to use a computational bootstrap resampling procedure to generate many synthetic data sets like our empirical one. Estimating the alpha parameter for each of these yields an ensemble of models that represents our uncertainty in the model specification that comes from the empirical data.

This figure overlays 1000 of these bootstrap models, showing that they do make slightly different estimates of the probability of 9/11-sized events or larger. As a sanity check, we find that the mean of these bootstrap parameters is alpha=2.397 with a standard deviation of 0.043 (quite close to the 2.4+/-0.1 value I published in 2009 [6]). Continuing with the simulation approach, we can numerically estimate the probability of a 9/11-sized or larger event by drawing synthetic data sets from the models in the ensemble and then asking what fraction of those events are 9/11-sized or larger. Using 10,000 repetitions yields an improved estimate of 40.3%.
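The bootstrap procedure described above can be sketched in a few lines. This is an illustrative version only: the data are synthetic draws from a continuous power law (standing in for the real "severe" event sizes), and alpha is re-fit on each resample with the standard continuous maximum-likelihood (Hill) estimator, which is my assumption about a reasonable fitting method rather than the exact machinery used in the post.

```python
import numpy as np

rng = np.random.default_rng(42)
alpha_true, xmin, n = 2.4, 10.0, 1000

# Synthetic "severe" event sizes via inverse-transform sampling:
# if U ~ Uniform(0,1), then xmin * U^(-1/(alpha-1)) is power-law distributed.
data = xmin * (1.0 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))

def mle_alpha(x, xmin):
    """Continuous power-law maximum-likelihood (Hill) estimate of alpha."""
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

# Bootstrap: resample the data with replacement, re-fit alpha each time.
boot = [mle_alpha(rng.choice(data, size=n, replace=True), xmin)
        for _ in range(1000)]
print(f"alpha = {np.mean(boot):.3f} +/- {np.std(boot):.3f}")
```

The spread of the bootstrap estimates quantifies the parameter uncertainty that the ensemble of models then propagates into the probability estimate.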

Some perspective: Having now gone through three calculations, it's notable that the probability of a 9/11-sized or larger event has almost doubled as we've improved our estimates. There are still additional improvements we could make, however, and these might push the number back down. For instance, although the power-law model is a statistically plausible model of the frequency-severity data, it's not the only such model. Alternatives like the stretched exponential or the log-normal decay faster than the power law, and if we were to add them to the ensemble of models in our simulation, they would likely yield 9/11-sized or larger events with lower frequencies and thus likely pull the probability estimate down somewhat. [7]

Peering into the future: Showing that catastrophic terrorist attacks like 9/11 are not in fact statistical outliers given the sheer magnitude and diversity of terrorist attacks worldwide over the past 40 years is all well and good, you say. But, what about the future? In principle, these same models could be easily used to make such an estimate. The critical piece of information for doing so, however, is a clear estimate of the trend in the number of events each year. The larger that number, the greater the risk under these models of severe events. That is, under a fixed model like this, the probability of catastrophic events is directly related to the overall level of terrorism worldwide. Let's look at the data.

Do you see a trend here? It's difficult to say, especially with the changing nature of the conflicts in Iraq and Afghanistan, where many of the terrorist attacks of the past 8 years have been concentrated. It seems unlikely, however, that we will return to the 2001 levels (200-400 events per year; the optimist's scenario). A dire forecast would have the level continue to increase toward a scary 10,000 events per year. A more conservative forecast, however, would have the rate continue as-is relative to 2007 (the last full year for which I have data), or maybe even decrease to roughly 1000 events per year. Using our estimates from above, 1000 events overall would generate about 75 "severe" events (10 or more fatalities) per year. Plugging this number into our computational model above (third estimate approach), we get an estimate of roughly a 3% chance of a 9/11-sized or larger attack each year, or about a 30% chance over the next decade. Not a certainty by any means, but significantly greater than is comfortable. Notably, this probability is in the same ballpark as our estimates for the past 40 years, which goes to show how dramatically the overall level of terrorism worldwide has increased during those decades.
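A back-of-the-envelope version of this forecast can be assembled from the point estimates quoted above (7.6% severe fraction, alpha = 2.4, xmin = 10). Note this simple sketch ignores the parameter uncertainty captured by the full bootstrap ensemble, so it lands a bit below the ensemble-based figures; it is meant only to show the arithmetic.

```python
import math

# Point estimates from the analysis above.
p_severe = 0.076                               # fraction of events that are "severe"
p_911_given_severe = (2749 / 10.0) ** -1.4     # power law: alpha = 2.4, xmin = 10
events_per_year = 1000                         # conservative forecast

severe_per_year = events_per_year * p_severe   # ~76 severe events per year
p_year = 1.0 - math.exp(-severe_per_year * p_911_given_severe)
p_decade = 1.0 - (1.0 - p_year) ** 10

print(f"~{severe_per_year:.0f} severe events/year")
print(f"P(9/11-sized or larger): {p_year:.1%}/year, {p_decade:.1%}/decade")
```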

It bears repeating that this forecast is only as good as the models on which it is based, and there are many things we still don't know about the underlying social and political processes that generate events at the global scale. (In contrast to the models the National Hurricane Center uses to make hurricane track forecasts.) Our estimates for terrorism all assume a homogeneous and stationary process where event severities are independent random variables, but we would be foolish to believe that these assumptions are true in the strong sense. Technology, culture, international relations, democratic movements, urban planning, national security, etc. are all poorly understood and highly non-stationary processes that could change the underlying dynamics in the future, making our historical models less reliable than we would like. So, take these estimates for what they are, calculations and computations using reasonable but potentially wrong assumptions based on the best historical data and statistical models currently available. In that sense, it's remarkable that these models do as well as they do in making fairly accurate long-term probabilistic estimates, and it seems entirely reasonable to believe that better estimates can be had with better, more detailed models and data.

Update 9 Sept. 2011: In related news, there's a piece in the Boston Globe (free registration required) about the impact 9/11 had on what questions scientists investigate that discusses some of my work.


[1] Estimates differ between databases, but the number of domestic or international terrorist attacks worldwide between 1968 and 2011 is somewhere in the vicinity of 50,000-100,000.

[2] The historical record here is my copy of the National Memorial Institute for the Prevention of Terrorism (MIPT) Terrorism Knowledge Base, which stores detailed information on 36,018 terrorist attacks worldwide from 1968 to 2008. Sadly, the Department of Homeland Security pulled the plug on the MIPT data collection effort a few years ago. The best remaining data collection effort is the one run by the University of Maryland's National Consortium for the Study of Terrorism and Responses to Terrorism (START) program.

[3] For newer readers: a power-law distribution is a funny kind of probability distribution function. Power laws pop up all over the place in complex social and biological systems. If you'd like an example of how weird power-law distributed quantities can be, I highly recommend Clive Crook's 2006 piece in The Atlantic titled "The Height of Inequality" in which he considers what the world would look like if human height were distributed as unequally as human wealth (a quantity that is very roughly power-law-like).

[4] If you're curious, here's how they did it. First, they took the power-law model and the parameter value I estimated (alpha=2.38) and computed the model's complementary cumulative distribution function. The "ccdf" tells you the probability of observing an event at least as large as x, for any choice of x. Plugging in x=2749 yields p=0.0000282. This gives the probability of any single event being 9/11-sized or larger. The report was using an older, smaller data set with N=9101 deadly events worldwide. The expected number of these events 9/11-sized or larger is then p*N=0.257. Finally, if events are independent then the probability that we observe at least one event 9/11-sized or larger in N trials is 1-exp(-p*N)=0.226. Thus, about a 23% chance.
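The report's arithmetic is simple enough to reproduce directly from the numbers quoted above:

```python
import math

p = 0.0000282   # P(a single event is 9/11-sized or larger), from the ccdf
N = 9101        # deadly events in the older data set

expected = p * N                              # expected number of such events
prob_at_least_one = 1.0 - math.exp(-expected) # P(at least one in N trials)
print(f"expected events: {expected:.3f}")
print(f"P(at least one): {prob_at_least_one:.3f}")
```

The `1 - exp(-p*N)` step treats the events as independent draws, so the count of 9/11-sized events is approximately Poisson with mean p*N.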

[5] This changes the calculations only slightly. Using alpha=2.4 (the estimate I published in 2009), given that a "severe" event happens, the probability that it is at least as large as 9/11 is p=0.00038473 and there were only N=1024 of them from 1968-2008. Note that the probability is about a factor of 10 larger than the DoD estimate while the number of "severe" events is about a factor of 10 smaller, which implies that we should get a probability estimate close to theirs.
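The piecewise-model version of the calculation follows the same template, now with the power law anchored at xmin = 10 for "severe" events. A sketch, using a continuous power-law ccdf of the form (x/xmin)^-(alpha-1), which reproduces the probability quoted above:

```python
import math

alpha, xmin = 2.4, 10.0
p = (2749 / xmin) ** -(alpha - 1.0)  # P(a severe event is >= 2749 deaths)
N = 1024                             # "severe" events, 1968-2008

prob = 1.0 - math.exp(-p * N)
print(f"p = {p:.6f}")
print(f"P(at least one 9/11-sized event) = {prob:.3f}")
```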

[6] In "Power-law distributions in empirical data," SIAM Review 51(4), 661-703 (2009), with Cosma Shalizi and Mark Newman.

[7] This improvement is mildly non-trivial, so perhaps too much effort for an already long-winded blog entry.

posted September 9, 2011 02:17 PM in Terrorism | permalink | Comments (1)

February 13, 2011

Proximate vs. Ultimate Causes

Jon Wilkins, a former colleague of mine at the Santa Fe Institute, has discovered a new talent: writing web comics. [1] Using the comic-strip generator framework provided by Strip Generator, he's begun producing comics that cleverly explain important ideas in evolutionary biology. In his latest comic, Jon explains the difference between proximate and ultimate causes, a distinction the US media seems unaware of, as evidenced by their irritating fawning over the role of social media like Twitter, Facebook, etc. in the recent popular uprisings in the Middle East. Please keep them coming, Jon.

Proximate vs. Ultimate Causes by jonfwilkins


[1] Jon has several other talents worth mentioning: he's an evolutionary biologist, an award-winning poet, a consumer of macroeconomic quantities of coffee as well as a genuinely nice guy.

posted February 13, 2011 04:58 PM in Scientifically Speaking | permalink | Comments (0)

November 05, 2010

Nathan Explains Science, welcome to the blogosphere!

My good friend Nathan Collins, whom I worked with at the Santa Fe Institute, has now joined the blogosphere with Nathan Explains Science. Welcome!

Nathan is a former theoretical astrophysicist who holds a PhD in political science. The first time I met him, I thought this meant that he studied awesome things like galactic warfare, black-hole cover-ups, and the various environmental disasters that come with unregulated and rampant terraforming. Fortunately for us, he instead studies actual politics and social science, which is probably more useful (and sadly more interesting) than astro-politics.

Here's Nathan explaining why he's now also a blogger:

...this gives me a place to tell you about science news that I think is interesting but that isn't necessarily going to get published in Science News or Nature's news section. For a variety of reasons, social science news especially doesn't get discussed as science, and that's unfortunate because there are scientific results coming out of psychology, political science, and economics that are vitally important for understanding the problems we face and the solutions we should pursue. In fact, there are a lot of old results that people should know about but don't because social science news seems less attractive than, say, finding a galaxy farther away than any other.

And, if you want to read more about the science, try out these stories, in which Nathan explains the heck out of narcissism, what makes us vote, political grammar and baby introspection:

Is Narcissism Good For Business?
Narcissists, new experiments show, are great at convincing others that their ideas are creative even though they're just average. Still, groups with a handful of narcissists come up with better ideas than those with none, suggesting that self-love contributes to real-world success.

Sweaty Palms and Puppy Love: The Physiology of Voting
Does your heart race at the sight of puppies? Do pictures of vomit make you sweat? If so, you may be more likely to vote.

Politicians, Watch Your Grammar
As congressional midterm elections approach in the United States, politicians are touting their credentials, likability, and, yes, sometimes even their policy ideas. But they may be forgetting something crucial: grammar. A new study indicates that subtle changes in sentence structure can make the difference between whether voters view a politician as promising or unelectable.

‘Introspection’ Brain Networks Fully Formed at Birth
Could a fetus lying in the womb be planning its future? The question comes from the discovery that brain areas thought to be involved in introspection and other aspects of consciousness are fully formed in newborn babies...

More Evidence for Hidden Particles?
Like Lazarus from the dead, a controversial idea that there may be a new, superhard-to-spot kind of particle floating around the universe has made a comeback. Using a massive particle detector, physicists in Illinois have studied the way elusive particles called antineutrinos transform from one type or "flavor" to another, and their data bolster a decade-old claim that the rate of such transformation is so high that it requires the existence of an even weirder, essentially undetectable type of neutrino. Ironically, the same team threw cold water on that idea just 3 years ago, and other researchers remain skeptical.

Update 6 November 2010: added a new story by Nathan, on hidden particles.

posted November 5, 2010 08:04 AM in Scientifically Speaking | permalink | Comments (0)

July 19, 2010

Top Secret America and the National Surveillance State

There's an excellent piece (the first of three; the others will appear later this week) at the Washington Post, titled Top Secret America, on the extensive buildup of highly classified, intelligence-related infrastructure since the 9-11 terrorist attacks. By "infrastructure", I mean the creation of hundreds of new government agencies and task forces, the construction of dozens of huge new governmental facilities for handling and analyzing intelligence, the investment of hundreds of billions of dollars in contracts to private companies, and perhaps even a little investment in developing genuinely new intelligence capabilities [1]. What struck me most is that the people at the top of the intelligence food chain, such as the Secretary of Defense and a former Director of National Intelligence [2], stated for the article that this new intelligence infrastructure is too complicated to understand and that no one has complete knowledge or authority over its various projects or funding.

Recently, at the recommendation of a friend and former intel insider, I read Tim Shorrock's Spies For Hire, which describes in great detail the trials and tribulations of the US intelligence community over the past 20-30 years and the ultimate privatization of about 70% of the intelligence community [3,4]. The Washington Post piece paints a more foreboding picture than Shorrock does by suggesting that the build-up after 9-11 was somehow a departure from previous trends. Shorrock, however, argues that the Clinton administration played a large role in the outsourcing of sensitive intelligence-related governmental activities to the private sector and that 9-11 simply accelerated that trend by providing enough money to support a feeding frenzy by private intel companies and defense-related agencies.

One of Shorrock's main concerns is that the huge buildup of intelligence infrastructure, with much of it held privately and with so little systematic oversight and public accountability, provides an immense potential for abuse both by well-meaning government agencies and self-interested individuals. Almost surely there will be bad outcomes as a result of all this buildup, and the fact that these systems are so shrouded in secrecy means that it may be left up to whistle-blowers, clever investigative journalists, and well-monied lawsuits to ferret them out, publish the transgressions in the press, and basically embarrass the government into behaving better. This does not seem like a good way to design a functioning system.


[1] Surely, many of these things are useful [5] and many probably legitimately improve national security. The question is whether the huge amount of money invested since 9-11 has been used wisely, whether it has legitimately addressed the problems identified by the 9-11 Commission, and whether it's building a sustainable, functioning capability for reducing legitimate threats to national security in the long-term. On these counts, the evidence doesn't seem encouraging.

[2] The Office of the Director of National Intelligence issued a short press release today claiming that the Washington Post story is wrong and that everything is going well.

[3] Apparently, 70% of total US government expenditures on intelligence go to private companies through contracts to provide various hardware, software, logistical support, and services. Some of this was probably unavoidable. For instance, the National Security Agency (NSA) produced the best science and technology on cryptography for most of the 20th century. But, with the rise of the commercial computing industry, the NSA's efforts were eclipsed and the best capabilities now often lie outside the government. Thus, it seems reasonable that some capabilities would be sourced from outside the government. But, it's not clear that 70% of the necessary work has to be supplied by private companies, especially analysis-related work.

[4] For more information, I've been recommended these sources: Secrecy News, a blog by Steven Aftergood, The Spy Who Billed Me, a blog by R. J. Hillhouse, or Outsourced, a book by R. J. Hillhouse on intelligence outsourcing.

[5] For instance, I didn't know that the maps used by Google Earth were originally provided by a company called Keyhole Inc, which was partly funded by the CIA's venture capital fund.

posted July 19, 2010 01:16 PM in Political Wonk | permalink | Comments (0)

March 01, 2009

The future of privacy

Bruce Schneier (wikipedia page) recently wrote a nice essay on the consequences of computer technology on privacy. Here's the opening paragraph

Welcome to the future, where everything about you is saved. A future where your actions are recorded, your movements are tracked, and your conversations are no longer ephemeral. A future brought to you not by some 1984-like dystopia, but by the natural tendencies of computers to produce data.

Schneier hits the issue on the head: increasingly, our actions and statements are not lost in the sands of time, but are recorded, stored, and analyzed for profit and power. Sometimes recording information about yourself is desirable, since it can create a convenient way to remember things you might've forgotten [1]. But right now, it's rarely you who actually stores and controls the data on yourself. Instead, corporations and governments "own" data about you [2], and use it to advance their own interests.

Ideally, we would each choose how much personal data to divulge and to which party we divulge it based on how much we value the services that use our personal data. For instance, advertising is a modern annoyance that could theoretically be made less annoying if advertisers could be more targeted. That is, part of the reason advertising is annoying is that most of the ads we see are not remotely interesting to us. Enter the miracle of personal data: if only advertisers knew enough about each of our real interests, then they would know which ads weren't interesting to us, and they would show us only ads that were actually interesting! This argument is basically a lie [3], but it highlights the idea that there should be a tradeoff between our privacy and our convenience, and that we should get to choose which we value more.

My favorite example of this tradeoff is Facebook, a place where people divulge all sorts of private information. Given that Facebook holds a treasure trove of demographic, interest, and social data, along with a ready-made audience for ad targeting, it's an obvious business plan to try to monetize it through advertising. Facebook's efforts to do so, e.g., their Beacon initiative and the recent revision to their Terms of Service, have drawn a strong backlash because people really do care about how their personal data is used, and whether it's being used in a way that serves their interests or another's [4].

Another Facebook example comes from employers monitoring their employees' Facebook pages, and holding them accountable for their private actions (e.g., here and here). This issue exemplifies a deeper problem with the public availability of private data. Schneier mentions in his opening paragraph that it's bad that conversations are often no longer ephemeral. But, what does that really mean? Well, consider what it might be like to try to run for public office (say, Congress) in 2030, having grown up with most of your actions and statements being recorded, by Facebook, by modern advertisers, etc. During the campaign, all those records of the stupid, racy, naive things you did or said when you were young, innocent and didn't know better will come back to haunt you. In the past, you could be assured that most of that stuff was forgotten, and you could grow into a new, better, more mature person by leaving your past behind. If everything is recorded, you can never leave your past behind. Nothing is forgotten, and nothing is truly forgiven.

So, the cost of losing our privacy is not measured simply in terms of how much other people know about our current actions and attitudes, which is a high cost anyway. It's also the cost of defending your past actions and statements (potentially even those of your childhood), and of having those judged by unfriendly or unsympathetic voices. Sometimes I wonder whether blogging now will hurt me in the future, since it would be easy for a future potential employer to trawl my blog for statements that seem controversial or attitudes that they deem undesirable. There used to be a stronger respect for the division between public and private lives, but I think that's been fading for a long time now. Blogs are public forums. Facebook is a semi-public forum [5]. Your workplace is under surveillance by your employer. Your streets are watched by the government (for your protection, naturally). In fact, the only truly private place is your home [6].

The upside of recording everything, and a point missed by Schneier, is that it's not easy to use all this data in a coherent and coordinated fashion. Credit card companies know a tremendous amount about each of us from our purchase histories, but they struggle to use that information effectively because they don't have the computational tools to individually understand their customers. Instead, they build aggregate profiles or "segments", and throw out all the other details. Although the computational tools will certainly improve, and there will be startling revelations about how much corporations, governments and our neighbors know about us, I'm not terribly worried about the dystopian future Schneier paints. That is, for most of us, we'll be hiding in plain sight because there will be too much information out there for us to stick out. The real dangers lie in believing that you shouldn't be careful about what you let be recorded, that you can avoid being noticed regardless of what you do or say (aka security through obscurity), or that you can continue hiding once you've been noticed. Privacy is not dead, it's just a lot more complicated than it used to be.


[1] My favorite feature of Safari is the "Reopen All Windows From Last Session" one, which lets me remember what I was looking at before I rebooted my computer, or before Safari crashed.

[2] Who owns this data is a critical question that will ultimately require an act of Congress to sort out, I think. (I wonder if copyright law will eventually be applied here, in the sense that I "author" the data about myself and should thus be able to control who is able to profit from it.)

Generally, I come down on the side of personal control over data about yourself, at least for private parties. That is, I should be able to authorize a company to use data about myself, and I should be able to revoke that authority and know that the company will not store or sell information about me to another party. With governments, I think the issue is a little trickier, since I think they have legitimate reasons to know some things about their citizens.

[3] The marginal cost is so low for showing an ad to someone who's not interested in it that you'd be crazy to expect economics to drive advertisers to show you less of them. Besides, it's hard to say before seeing an ad whether we're actually not interested in it.

[4] This point makes it clear that businesses have a legitimate path to getting a hold of their customers' personal information, which is to give people something in return for it. Ideally, this would be a customized service that utilizes the personal data to make better recommendations, etc., but sadly it's often a one-time payment like a discount and the data is then sold to advertisers.

[5] To their credit, Facebook gives its users better control over who can see what aspects of their profile than many past social networking websites.

[6] And if you live in a city, attached to city services, even your home is not as private as you might think. One of my favorite examples of this comes from testing the raw sewage of a neighborhood for traces of illegal drugs.

posted March 1, 2009 01:48 PM in Things to Read | permalink | Comments (3)

January 20, 2009

Inauguration Day

At the risk of sounding understated, today's events were very exciting. Congratulations President Obama!

posted January 20, 2009 10:31 PM in Political Wonk | permalink | Comments (0)

November 02, 2008


The election is coming up very quickly, and just in time there is a statistical analysis of the likelihood that your vote will decide the election. [1] The analysis is due to Andrew Gelman and colleagues [2], and below is their main figure, showing which states are likely to have the closest races (lighter colors). For instance, my current home state of New Mexico (which Gore won by fewer than 500 votes in 2000, and Bush won by fewer than 3000 votes in 2004 [3]) is one of the places where the national presidential election could come down to a single vote. This makes me especially glad that I got my vote in early (via absentee ballot, since I'm in New York right now).

For those of you who haven't voted, but can, please make the rational choice [4] and vote in this election.


For those interested in the details, here's the abstract for Gelman's writeup [5]:

One of the motivations for voting is that one vote can make a difference. In a presidential election, the probability that your vote is decisive is equal to the probability that your state is necessary for an electoral college win, times the probability the vote in your state is tied in that event. We compute these probabilities for each state in the 2008 presidential election, using state-by-state election forecasts based on the latest polls. The states where a single vote is most likely to matter are New Mexico, Virginia, New Hampshire, and Colorado, where your vote has an approximate 1 in 10 million chance of determining the national election outcome. On average, a voter in America has a 1 in 60 million chance of being decisive in the presidential election.

(tip to Jake.)


[1] For those who haven't yet discovered them, there are many places doing interesting kinds of forecasting for this election; one site frequently mentioned to me does sophisticated voting simulations.

[2] Gelman blogs the analysis at the Statistical Modeling, Causal Inference, and Social Science blog, and there's some additional commentary at Red State Blue State Rich State Poor State.

[3] These figures are, I think, roughly what I heard at a recent political rally, but they could be wrong either because of a bum memory or a biased source.

[4] Edlin, Gelman and Kaplan, "Voting as a Rational Choice: Why and How People Vote To Improve the Well-Being of Others." Rationality and Society 19, 293 (2007). Gelman has blogged some additional comments on this topic here.

[5] Gelman, Silver, and Edlin, "What is the probability your vote will make a difference?" Pre-print (2008).

posted November 2, 2008 03:03 PM in Political Wonk | permalink | Comments (0)

March 28, 2008

Food for thought (2)

This is an exceptionally well done piece of grass-roots boosterism for Obama. Also, his speech was pretty good. Back in February, I went to both a Clinton rally and an Obama rally. Obama was a significantly better orator than Clinton, for sure.

posted March 28, 2008 08:33 AM in Political Wonk | permalink | Comments (0)

September 12, 2007

Is terrorism getting worse? A look at the data. (part 2)

Cosma and Matt both wanted to see how the trend fares when we normalize by the increase in world population, i.e., has terrorism worsened per capita? Pulling data from the US Census's world population database, we can do this. The basic facts are that the world's population has increased at a near-constant rate over the past 40 years, going from about 3.5 billion in 1968 to about 6.6 billion in 2007. In contrast, the total number of deaths per year from terrorism (according to MIPT) has gone from 115 (on average) over the first 10 years (1968 - 1977) to about 3900 (on average) over the last 10 years (1998 - 2007). Clearly, population increase alone (about a factor of 2) cannot explain the increase in total deaths per year (about a factor of 34).

However, this view gives a slightly misleading picture because the MIPT changed the way it tracked terrorism in 1998 to include domestic attacks worldwide. Previously, it only tracked transnational terrorism (target and attackers from different countries), so part of the apparent large increase in deaths from terrorism in the past 10 years is due simply to including a greater range of events in the database. Looking at the average severity of an event circumvents this problem to some degree, so long as we assume there's no fundamental difference in the severity of domestic and transnational terrorism (fortunately, the pre-1998 and post-1998 distributions are quite similar, so this may be a reasonable assumption).

The misleading per capita figure is immediately below. One way to get at the question, however, is to throw out the past 10 years of data (domestic + transnational) and focus only on the first 30 years of data (transnational only). Here, the total number of deaths increased from 115 (on average) in the first decade to 368 in the third decade (1988 - 1997), while the population increased from 3.5 billion in 1968 to 5.8 billion in 1997. The implication is that total deaths from transnational terrorism have increased more quickly than we would expect based on population increases, even if we account for the slight increase in lethality of attacks over this period. Thus, we can argue that the frequency of attacks has significantly increased in time.
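The per capita comparison above is easy to check directly. Plugging in the figures quoted in the text:

```python
# Deaths per capita from transnational terrorism, first decade vs. third decade,
# using the averages and population figures quoted above.
deaths_first = 115      # avg deaths/year, 1968-1977
deaths_third = 368      # avg deaths/year, 1988-1997
pop_1968 = 3.5e9
pop_1997 = 5.8e9

rate_first = deaths_first / pop_1968
rate_third = deaths_third / pop_1997

print(f"deaths per billion people per year: {rate_first*1e9:.0f} -> {rate_third*1e9:.0f}")
print(f"per capita increase: {rate_third/rate_first:.1f}x")
```

Even after dividing out population growth, the per capita death rate roughly doubles between the first and third decades, which is the basis for the claim that the frequency of attacks has genuinely increased.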

The clearer per capita-event figure is the next one. What's remarkable is that the per capita-event severity is extremely stable over the past 40 years, at about 1 death per billion (dpb) per event. This suggests that, if there really has been a large increase in the total deaths (per capita) from terrorism each year (as argued above), it must be mainly attributable to an increase in the number of lethal attacks each year, rather than to the attacks themselves becoming worse.

So, is terrorism getting worse? Not in the sense that individual attacks are becoming deadlier; rather, terrorism is becoming a more popular mode of behavior, with more lethal attacks occurring each year. From a policy point of view, this would seem a highly problematic trend.

A. Clauset, M. Young and K. S. Gleditsch, "On the Frequency of Severe Terrorist Attacks." Journal of Conflict Resolution 51(1): 58 - 88 (2007).

posted September 12, 2007 08:12 AM in Political Wonk | permalink | Comments (2)

September 11, 2007

Is terrorism getting worse? A look at the data.

Data are taken from the MIPT database and include all events that list at least one death (10,936 events, or 32.3% of all recorded events, as of May 17, 2007). Scatter points are the average number of deaths per lethal attack in a given year. The linear trend has a slope of roughly 1 additional death per 20 years, on average. (Obviously, a more informative characterization would be the distribution of deaths, which would give some sense of the variability about the average.) Smoothing was done using an exponential kernel, and black triangles indicate the years (1978, 1991 and 2006) of the local minima of the smoothed function. Other smoothing schemes give similar results, and the auto-correlation function on the scatter data indicates that the average severity of lethal attacks oscillates with a roughly 13-year periodicity. If this trend holds, 2006 was a low point for lethal terrorism.
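The smoothing and autocorrelation steps can be sketched as follows. The yearly series below is synthetic, standing in for the MIPT averages (which are not reproduced here), and the smoothing parameter is an arbitrary illustrative choice:

```python
import numpy as np

# Synthetic yearly average-severity series standing in for the MIPT data (1968-2006):
# a slow linear trend plus a ~13-year oscillation plus noise.
rng = np.random.default_rng(0)
years = np.arange(1968, 2007)
t = years - 1968
severity = 3 + 0.05 * t + np.sin(2 * np.pi * t / 13) + 0.3 * rng.standard_normal(len(t))

def exp_smooth(x, alpha=0.3):
    """Exponential-kernel smoothing: each point is a geometrically decaying
    average of the series up to that point (alpha is illustrative)."""
    s = np.empty(len(x), dtype=float)
    s[0] = x[0]
    for i in range(1, len(x)):
        s[i] = alpha * x[i] + (1 - alpha) * s[i - 1]
    return s

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    y = x - x.mean()
    return np.dot(y[:-lag], y[lag:]) / np.dot(y, y)

smoothed = exp_smooth(severity)
# A peak in the autocorrelation function near lag 13 would be the signature of
# the roughly 13-year periodicity described in the text.
acf = [autocorr(severity, k) for k in range(1, 20)]
```

Local minima of `smoothed` would then mark the low years of the cycle, analogous to the black triangles in the figure.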

A. Clauset, M. Young and K. S. Gleditsch, "On the Frequency of Severe Terrorist Attacks." Journal of Conflict Resolution 51(1): 58 - 88 (2007).

posted September 11, 2007 10:33 AM in Political Wonk | permalink | Comments (3)

August 18, 2007

WikiScanner reveals biased edits of Wikipedia

I met Virgil Griffith, a graduate student at Caltech in cognitive science (but also a skilled computer hacker), earlier this year while he was visiting SFI to work with Eric Smith and Doyne Farmer.

Virgil's gotten himself quite a bit of coverage over a simple piece of software he wrote (apparently at least partially while he was at SFI) called WikiScanner, which matches the IP addresses of the editors of a particular Wikipedia page against things like the WHOIS database (for example, here) and an IP geolocation service, to find out whether certain, ahem, interested parties are editing pages related to their interests. Predictably, many corporations, from PepsiCo to ExxonMobil to Fox News, have removed portions (typically the unfavorable bits) of the Wikipedia articles about themselves.
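The core trick, matching an anonymous edit's IP address against an organization's registered address block, is straightforward. Here's a minimal sketch; the organization names and IP ranges below are invented for illustration (WikiScanner builds its mapping from real WHOIS records):

```python
import ipaddress

# Hypothetical organization -> registered IP blocks, invented for illustration.
# The real WikiScanner derives a mapping like this from WHOIS data.
org_blocks = {
    "ExampleCorp": [ipaddress.ip_network("192.0.2.0/24")],
    "ExampleUni": [ipaddress.ip_network("198.51.100.0/25")],
}

def who_edited(ip_str):
    """Return the organization whose registered block contains this editor's IP, if any."""
    ip = ipaddress.ip_address(ip_str)
    for org, blocks in org_blocks.items():
        if any(ip in block for block in blocks):
            return org
    return None

print(who_edited("192.0.2.42"))   # prints ExampleCorp
print(who_edited("203.0.113.7"))  # prints None: not in any known block
```

Wikipedia logs the IP address of every anonymous edit, so joining that log against such a table is all it takes to attribute edits to institutions.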

Well done, Virgil, well done.

I think Wired wrote the first story, but there's now an ABC news story and a NYTimes story, too.

posted August 18, 2007 08:40 PM in Political Wonk | permalink | Comments (0)

August 14, 2007

Douglas Adams on the scientific method

I finally got around to reading Douglas Adams' Dirk Gently's Holistic Detective Agency, and thoroughly enjoyed it. I especially liked the bit about the horse. In addition to being a very clever writer, Adams was a passionate supporter of sense and rationality. At a 1998 conference at Cambridge, he gave an impromptu speech about religion and science. Here's an excerpt.

Now, the invention of the scientific method and science is, I'm sure we'll all agree, the most powerful intellectual idea, the most powerful framework for thinking and investigating and understanding and challenging the world around us that there is, and that it rests on the premise that any idea is there to be attacked and if it withstands the attack then it lives to fight another day and if it doesn't withstand the attack then down it goes. Religion doesn't seem to work like that; it has certain ideas at the heart of it which we call sacred or holy or whatever. That's an idea we're so familiar with, whether we subscribe to it or not, that it's kind of odd to think what it actually means, because really what it means is 'Here is an idea or a notion that you're not allowed to say anything bad about; you're just not. Why not? - because you're not!' If somebody votes for a party that you don't agree with, you're free to argue about it as much as you like; everybody will have an argument but nobody feels aggrieved by it. If somebody thinks taxes should go up or down you are free to have an argument about it, but on the other hand if somebody says 'I mustn't move a light switch on a Saturday', you say, 'Fine, I respect that'. The odd thing is, even as I am saying that I am thinking 'Is there an Orthodox Jew here who is going to be offended by the fact that I just said that?' but I wouldn't have thought 'Maybe there's somebody from the left wing or somebody from the right wing or somebody who subscribes to this view or the other in economics' when I was making the other points. I just think 'Fine, we have different opinions'. But, the moment I say something that has something to do with somebody's (I'm going to stick my neck out here and say irrational) beliefs, then we all become terribly protective and terribly defensive and say 'No, we don't attack that; that's an irrational belief but no, we respect it'.

If you enjoyed that, there's a clip of Richard Dawkins reading a portion of the larger speech from which the above excerpt comes.

posted August 14, 2007 11:12 AM in Political Wonk | permalink | Comments (0)

January 30, 2007

Science funding

Since most of my intellectual activities depend, in some way, on the generosity of the American people (via taxes) and the political will of politicians, I can't help but follow the problems of funding for science. For those of you not nearly as obsessed with the relationship between science and our society, let me catch you up on the recent political turmoil. President Bush announced the "American Competitiveness Initiative" in his State of the Union 2006 address, which proposed to substantially increase federal funding of science (via agencies like NSF, NIST and the DOE). But, as is usually the case with such programs, there was always the question of whether real money would follow the promise. Then, when funding for FY2007 started getting tight, Congress froze almost all government agencies' funding at their FY2006 levels, which basically killed the idea of increasing funding for science. But, in a recent turn-about (largely the result of Democrats' actions), Congress passed a "continuing resolution" that would exempt the main basic-science agencies from the freeze. From the Computing Research (CRA) Policy Blog:

Science was one of just a few priorities protected by Congressional Democrats in the bill -- it joins federal highway programs, veteran's health care, the FBI and local law enforcement, and Pell grant funding.

The result is that the basic-science agencies will see a slight increase in funding, although not quite what the President's initiative promised. Good news for science, and good news for society. Why the latter? Because this kind of investment is what makes our country special, and thus worth defending, in the first place.

(Tip to Lance Fortnow.)

posted January 30, 2007 08:01 PM in Political Wonk | permalink | Comments (0)

November 06, 2006

Vote today.

If you live in the US and have not already voted (I voted early, and it was a snap), please please do so today.

Having enjoyed many other quotations by Aldous Huxley, I thought he might have one relevant to today's election. Here's the best I could find: "The most shocking fact about war is that its victims and its instruments are individual human beings, and that these individual beings are condemned by the monstrous conventions of politics to murder or be murdered in quarrels not their own."

posted November 6, 2006 10:28 PM in Political Wonk | permalink | Comments (0)

September 08, 2006

Academic publishing, tomorrow

Imagine a world where academic publishing is handled purely by academics, rather than ruthless, greedy corporate entities. [1] Imagine a world where hiring decisions were made on the technical merit of your work, rather than the coterie of journals associated with your c.v. Imagine a world where papers are living documents, actively discussed and modified (wikified?) by the relevant community of interested intellectuals. This, and a bit more, is the future, according to Adam Rogers, a senior associate editor at "Wired" magazine. (tip to The Geomblog)

The gist of Rogers' argument is that the Web will change academic publishing into this utopian paradise of open information. I seriously doubt things will be like he predicts, but he does raise some excellent points about how the Web is facilitating new ways of communicating technical results. For instance, he mentions a couple of on-going experiments in this area:

In other quarters, traditional peer review has already been abandoned. Physicists and mathematicians today mainly communicate via a Web site called arXiv. (The X is supposed to be the Greek letter chi; it's pronounced "archive." If you were a physicist, you'd find that hilarious.) Since 1991, arXiv has been allowing researchers to post prepublication papers for their colleagues to read. The online journal Biology Direct publishes any article for which the author can find three members of its editorial board to write reviews. (The journal also posts the reviews – author names attached.) And when PLoS ONE launches later this year, the papers on its site will have been evaluated only for technical merit – do the work right and acceptance is guaranteed.

It's a bit hasty to claim that peer review has been "abandoned", but the arxiv has certainly almost completely supplanted some journals in their role of disseminating new research [2]. This is probably most true for physicists, since they're the ones who started the arxiv; other fields, like biology, don't have a pre-print archive (that I know of), but they seem to be moving toward open access journals for the same purpose. In computer science, we already have something like this, since the primary venue for publication is in conferences (which are peer reviewed, unlike conferences in just about every other discipline), and whose papers are typically picked up by CiteSeer.

It seems that a lot of people are thinking or talking about open access this week. The Chronicle of Higher Education has a piece on the growing momentum for open-access journals. Its main message is a new letter, signed by 53 presidents of liberal arts colleges (including my own Haverford College), in support of the bill currently in Congress (although unlikely to pass this year) that would mandate that all federally funded research eventually be made publicly available. The comments from the publishing industry are unsurprisingly self-interested and uninspiring, but they also betray a great deal of arrogance and greed. I wholeheartedly support more open access to articles - publicly funded research should be free to the public, just like public roads are free for everyone to use.

But, the bigger question here is, Could any of these various alternatives to the pay-for-access model really replace journals? I'm less sure of the future here, as journals also serve a couple of other roles that things like the arxiv were never intended to fill. That is, journals run the peer review process, which, at its best, prevents erroneous research from getting a stamp of "community approval" and thereby distracting researchers for a while as they (a) figure out that it's mistaken, and (b) write new papers to correct it. This is why, I think, there is a lot of crap on the arxiv. A lot of authors self-police quite well, and end up submitting nearly error-free and highly competent work to journals, but the error-checking process is crucial, I think. Sure, peer review does miss a lot of errors (and frauds), but, to paraphrase Mason Porter paraphrasing Churchill on democracy, peer review is the worst form of quality control for research, except for all the others. The real point here is that until something comes along that can replace journals as the "community approved" body of work, I doubt they'll disappear. I do hope, though, that they'll morph into more benign organizations. PNAS and PLoS are excellent role models for the future, I think. And, they also happen to publish really great research.

Another point Rogers makes about the changes the Web is encouraging is a social one.

[...] Today’s undergrads have ... never functioned without IM and Wikipedia and arXiv, and they’re going to demand different kinds of review for different kinds of papers.

It's certainly true that I conduct my research very differently because I have access to Wikipedia, arxiv, email, etc. In fact, I would say that the real change these technologies will have on the world of research will be to decentralize it a little. It's now much easier to be a productive, contributing member of a research community without being down the hall from your colleagues and collaborators than it was 20 years ago. These electronic modes of communication just make it easier for information to flow freely, and I think that ultimately has a very positive effect on research itself. Taking that role away from the journals suggests that they will become more about getting that stamp of approval, than anything else. With its increased relative importance, who knows, perhaps journals will do a better job at running the peer review process (they could certainly use the Web, etc. to do a better job at picking reviewers...).

(For some more thoughts on this, see a recent discussion of mine with Mason Porter.)

Update Sept. 9: Suresh points to a recent post of his own about the arxiv and the issue of time-stamping.

[1] Actually, computer science conferences, impressively, are a reasonable approximation to this, although they have their own fair share of issues.

[2] A side effect of the arXiv is that it raises tricky issues regarding citation, timing and proper attribution. For instance, if a research article becomes a "living" document, proper citation becomes rather problematic: which version of an article do you cite? (Surely not all of them!) And, if you revise your article after someone posts a derivative work, are you obligated to cite it in your revision?

posted September 8, 2006 05:23 PM in Simply Academic | permalink | Comments (3)

August 12, 2006

Your academic-journal dollars at work

Having now returned from a relaxing and rejuvenating trip to a remote (read: no Internet) beach with my family, I am trying to catch up on where the world has moved since I last checked. Comfortably, it's still in one piece, although I'm not thrilled about the latest draconian attempts to scare people into feeling safe about flying in airplanes. Amazingly, only half of the 300 emails I received were spam, and what remained were relatively quickly dispatched. In catching up on science news, I find a new movement afoot to stop Elsevier - the ruthless, and notoriously over-priced, academic publishing house - from organizing arms fairs via one of its subsidiaries. Having recently watched the excellent documentary Why We Fight, on the modern military-industrial complex, this makes me a little concerned.

I've only refereed once for any Elsevier journal, and I now plan to never referee for any of them again. This idea is, apparently, not uncommon among other scientists, e.g., here, here and here. Charging exorbitant prices to under-funded academics who produce and vet the very same content being sold is one thing - exploitative, yes; deadly, no - but arms fairs are a whole different kind of serious. Idiolect is running a petition against this behavior.

Update, Aug. 24: Digging around on YouTube, I found this interview with Eugene Jarecki, the director of Why We Fight.

posted August 12, 2006 09:19 PM in Simply Academic | permalink | Comments (0)

July 31, 2006

Criticizing global warming

Dr. Peter Doran, an Antarctic climate researcher at UIC, was the author of one of two studies that polemicists like to use to dispute global warming. Although he's tried to correct the out-of-control spinning on the topic that certain deniers are wont to do, he's been largely unsuccessful. Politics and news, as always, trump both accuracy and honesty. In a recent article for the Amherst Times (apparently pulled mostly from his review of An Inconvenient Truth, which he gives "two frozen thumbs up"), he discusses this problem, and the facts. From the original:

...back to our Antarctic climate story, we indeed stated that a majority -- 58 percent -- of the continent cooled between 1966 and 2000, but let’s not forget the remainder was warming. One region, the Antarctic Peninsula, warmed at orders of magnitude more than the global average. Our paper did not predict the future and did not make any comment on climate anywhere else on Earth except to say, in our very first sentence, that the Earth’s average air temperature increased by 0.06 degrees Celsius per decade in the 20th century.

New models created since our paper was published have suggested a link between the lack of significant warming in Antarctica to the human-induced ozone hole over the continent. Besides providing a protective layer over the Earth, ozone is a greenhouse gas. The models now suggest that as the ozone hole heals, thanks to world-wide bans on harmful CFCs, aerosols, and other airborne particles, Antarctica should begin to fall in line and warm up with the rest of the planet. These models are conspicuously missing from climate skeptic literature. Also missing is the fact that there has been some debate in the science community over our results. We continue to stand by the results for the period analyzed, but an unbiased coverage would acknowledge the differences of opinion.

Tip to onegoodmove.

posted July 31, 2006 04:18 PM in Global Warming | permalink | Comments (0)

July 08, 2006

Electronic Frontier Foundation

For many years, I've been interested in the evolution of intellectual property law and of our civil liberties in a fully-wired world (where privacy is non-existent, by default). Typically, I get my updates on these fronts from Ars Technica, a techie site that follows these things closely. Net Neutrality (see here and here) is a big issue right now, and Congress is holding hearings about just how democratic it wants online speech to be (which is to say, some very deep pockets like Comcast and AT&T are trying to make it very undemocratic). One of the champions of keeping the digital world sane (by any reasonable measure) is the Electronic Frontier Foundation (EFF), who recently sued AT&T (new slogan: Your world. Delivered. To the NSA.) over their collusion with the NSA to violate the civil liberties of millions of Americans. MSNBC has an article that discusses a bit of the EFF's history and ongoing work that's worth a quick read. After that, I highly recommend popping over to the EFF's website, and, if you are so motivated, give them a little support.

posted July 8, 2006 01:35 AM in Political Wonk | permalink | Comments (0)

February 01, 2006

Defending academic freedom

Michael Bérubé, a literature and culture studies professor at Penn State University, has written a lecture (now an essay) on the academic freedom of the professoriat and the demands by (radical right) conservatives to demolish it, through state oversight, in the name of... academic freedom. The Medium Lobster would indeed be proud.

As someone who believes deeply in the importance of the free pursuit of intellectual endeavors, and who has a strong interest in the institutions that facilitate that path (understandable given my current choice of careers), Bérubé's commentary resonated strongly with me. Primarily, I just want to advertise Bérubé's essay, but I can't help but editorialize a little. Let's start with the late Sidney Hook, a liberal who turned staunchly conservative as a result of pondering the threat of Communism, who wrote in his 1970 book Academic Freedom and Academic Anarchy that

The qualified teacher, whose qualifications may be inferred from his acquisition of tenure, has the right honestly to reach, and hold, and proclaim any conclusion in the field of his competence. In other words, academic freedom carries with it the right to heresy as well as the right to restate and defend the traditional views. This takes in considerable ground. If a teacher in honest pursuit of an inquiry or argument comes to a conclusion that appears fascist or communist or racist or what-not in the eyes of others, once he has been certified as professionally competent in the eyes of his peers, then those who believe in academic freedom must defend his right to be wrong—if they consider him wrong—whatever their orthodoxy may be.

That is, it doesn't matter what your political or religious stripes may be: academic freedom is a foundational part of having a free society. At its heart, Hook's statement is simply a more academic restatement of Voltaire's famous assertion: "I disapprove of what you say, but I will defend to the death your right to say it." In today's age of unblinking irony (e.g., Bush's "Healthy Forests" initiative) for formerly shameful acts of corruption, cronyism and outright greed, such sentiments are depressingly rare.

Although I had read a little about the radical right's effort to install affirmative action for conservative professors in public universities (again, these people have no sense of irony), what I didn't know about is the national effort to introduce legislation (passed into law in Pennsylvania and pending in more than twenty other states) that gives the state oversight ability of the contents of the classroom, mostly by allowing students (non-experts) to sue professors (experts) for introducing controversial material in the classroom. Thus, the legislature and the courts (non-experts) would be able to define what is legally permissible classroom content, by clarifying the legal term "controversial", rather than professors (experts). Bérubé:

When [Ohio state senator Larry Mumper] introduced Senate Bill 24 [which allows students to sue professors, as described above] last year, he was asked by a Columbus Dispatch reporter what he would consider 'controversial matter' that should be barred from the classroom. "Religion and politics, those are the main things," he replied.

All I can say in response is that college is not a kind of dinner party. It can indeed be rude to bring up religion or politics at a dinner party, particularly if you are not familiar with all the guests. But at American universities, religion and politics are two of the hundreds of things we discuss on a daily basis. It really is part of our job, even — or especially — if some of us have unpopular opinions on those subjects.

How else do we learn but by having our pre- and misconceptions challenged by those people who have studied these things, been trained by other experts and been recognized by their peers as an authority? Without academic freedom as defined by Hook and defended by Bérubé, a university degree will signify nothing more than having received the official State-sanctioned version of truth. Few things would be more toxic to freedom and democracy.

posted February 1, 2006 10:45 PM in Simply Academic | permalink | Comments (0)

July 26, 2005

Global patterns in terrorism; part III

Neil Johnson, a physicist at Lincoln College of Oxford University, with whom I've been corresponding about the mathematics of terrorism for several months, has recently put out a paper that considers the evolution of the conflicts in Iraq and Colombia. The paper (on arxiv, here) relies heavily on the work Maxwell Young and I did on the power law relationship between the frequency and severity of terrorist attacks worldwide.

Neil's article, much like our original one, has garnered some attention among the popular press, so far yielding an article at The Economist (July 21st) that also heavily references our previous work. I strongly suspect that there will be more, particularly considering the July 7th terrorist bombings in London, and Britain's continued conflicted relationship with its own involvement in the Iraq debacle.

Given the reasonably timely attention these analyses are garnering, the next obvious step in this kind of work is to make it more useful for policy-makers. What does it mean for law enforcement, for the United Nations, for forward-looking politicians that terrorism (and, if Neil is correct in his conjecture, the future of modern armed geopolitical conflict) has this stable mathematical structure? How should countries distribute their resources so as to minimize the fallout from the likely catastrophic terrorist attacks of the future? These are the questions that scientific papers typically stay as far from as possible - attempting to answer them takes one out of the scientific world and into the world of policy and politics (shark infested waters for sure). And yet, in order for this work to be relevant outside the world of intellectual edification, some sort of venture must be made.

posted July 26, 2005 12:57 AM in Scientifically Speaking | permalink | Comments (0)

May 18, 2005

Galloway on Iraq

These are too interesting not to re-post here. Galloway on Iraq, in the Senate and on Hardball. Of particular interest is Coleman's refusal to accuse Galloway of directly profiteering, and Galloway's eloquent rebuttal of both the case for war and Coleman's McCarthyism.



(Video clips are mirrored from Norm Jenson's excellent blog One Good Move, and their coverage of the same.)

posted May 18, 2005 01:48 AM in Political Wonk | permalink | Comments (0)

April 10, 2005

Give the politicians swords and let God (or the dice) work out the details

This song produced by Hitachi's PR department to trumpet their recent breakthrough concerning storage technology wins the prize of most... bizarre.

In a twist of the surreal that only US Patent Law could create, Smuckers, the company that brings you the American wholesomeness of jelly, tried to patent the premade peanut butter and jelly sandwich, and issued cease-and-desist orders to other companies making sandwiches.

Finally, we should all give thanks that upright citizens like this BestBuy employee are observant in this post-9/11 world. But then, in this post-9/11 world of ours, convenience is in the eye of the beholder, and when the beholder is conservative pundits, no truth is too inconvenient to circumvent. Democrats should be wary of engaging such creatures:

Beholder: a powerful, magical monster. Although a fairly weak hand-to-hand fighter, the Beholder can spawn Skeletons to do the fighting for him. He can also shoot powerful magic at his opponent, which makes him a formidable distance fighter.

Democrats, listen up, this bit concerns you:

It is recommended that knights and paladins with a high shielding skill attack the Beholder directly in a melee fight. Ignore the skeletons until you have killed the Beholder. It is recommended that you carry several Healing Runes or Life Fluids. Using the defensive mode of attack is recommended because of the number of opponents and the fierce magic attacks.

Warning, if he is heavely wounded he have a bigger range with his missles, beware!

I couldn't have said it better, myself.

posted April 10, 2005 11:48 AM in Political Wonk | permalink | Comments (0)

March 16, 2005

Some bad, some good.

A first for this blog, but probably not the last. Many things that I read or thought about today, some good, some bad:

Paul "Give me another war" Wolfowitz is Bush's absurd pick for the head of the World Bank. Especially sad considering that he could have had Bono. But maybe not that surprising considering his pick of John "The U.N. doesn't exist" Bolton for the ambassador to the UN position. Plus, two democratic senators (from Hawaii no less) failed to use all the neurons in their brains and allowed the Arctic National Wildlife Refuge to be flagged as an industrial development site. But at least the California constitution doesn't forbid gay marriage anymore (predictably, Bush backs bigots in response).

Religiosity is 40% genetic. And pastors have recently taken to evangelizing in the land of the godless: the British supermarket? Actually, perhaps this says something about why Americans are so religious. Think: genetic selection.

The X-chromosome is making lots of headlines. Seems it's a lot more complicated than anyone expected (and it was already pretty complicated). And then there's the amazingly slick idea to track the movement of nucleotides in DNA by creating a phony fifth base.

The real online pirates are a pretty interesting bunch. Although they still get caught by the Feds, and then interviewed by journalists. How long has identity theft been a problem in the US? How long have US companies been little better than a leaky bucket for our personal information? Congress may finally be realizing that this is a topic on which it's better to side with voters.

The finance pirates (or, as they are lovingly called in the movie Igby Goes Down, neo-fascists) also took several hits this week (oh, there's another hit). I'm not entirely convinced that corporate America is actually changing for the good, or if its dirty ways are just being pushed further underground. Let's just say there's a lot of hard work to be done before my faith is restored.

And finally, who says Ivy League professors are boring? Jon Stewart with Professor Harry Frankfurt from Princeton "On Bullshit".

There, and now you know most of the sites I visit on a daily basis.

p.s. Who says IRC isn't useful?

posted March 16, 2005 11:58 PM in Political Wonk | permalink | Comments (0)

February 09, 2005

Global patterns in terrorism

Although the severity of terrorist attacks may seem to be either random or highly planned in nature, it turns out that in the long run, it is neither. By studying the set of all terrorist attacks worldwide between 1968 and 2004, we show that a simple mathematical rule, a power law with exponent close to two, governs the frequency and severity of attacks. Thus, if history is any basis for predicting the future, we can estimate with some confidence how long it will be before the next catastrophic attack occurs somewhere in the world.
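
To make the scaling concrete, here is a small illustrative sketch (not our actual analysis; the function and the numbers plugged into it are hypothetical) of how a power law with exponent near two translates an event's severity into its relative frequency:

```python
# Illustrative sketch only: for a continuous power-law severity distribution
# p(x) ~ x^(-alpha) with x >= xmin, the complementary CDF is
# P(X >= x) = (x / xmin)^(1 - alpha).  With alpha = 2, an attack ten times
# more severe is ten times rarer.  (Hypothetical numbers, not the paper's data.)

def relative_frequency(x, alpha=2.0, xmin=1.0):
    """P(severity >= x), normalized so that P(severity >= xmin) = 1."""
    return (x / xmin) ** (1.0 - alpha)

print(relative_frequency(10))    # 0.1  -> 10x-larger events are 10x rarer
print(relative_frequency(100))   # 0.01 -> 100x-larger events are 100x rarer
```

This is what makes the pattern useful: the same straight line (on log-log axes) that fits the many small attacks also extends out to the rare, catastrophic ones.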

In joint work with Max Young, we've discovered a surprising global pattern in terrorism over the past 37 years. A brief write-up of our findings can be found here.

Update: PhysicsWeb has done a brief story covering this work as well. The story is fairly reasonable, although the writer omitted a statement I made about caution with respect to this kind of work. So, here it is:

Generally, one should be cautious when applying the tools of one field (e.g., physics) to make statements in another (e.g., political science) as in this case. The results here turned out to be quite nice, but in approaching similar questions in the future, we will continue to exercise that caution.

posted February 9, 2005 10:30 PM in Scientifically Speaking | permalink | Comments (2)

February 06, 2005

Culture and the politics of marriage

Marriage is supposed to be a celebration of life and commitment. It is supposed to be a time when two people decide to share their lives in the most intimate of ways - becoming a single taxable entity. Although there are certainly a great many incentives for marriage, e.g., studies often show it's a long-term investment in one's health, there are other advantages to remaining single, like greater independence and flexibility in picking out furniture.

Yet despite the fundamental importance of the family unit to the continuation of the human race, the trend is for Western young adults to stay "single" longer and marry later, particularly when they are well-educated or perhaps part of the "creative class". I suspect another positive correlation with a lengthier singlehood: living in a dense urban area, such as New York City or San Francisco, provides such a vast array of opportunities (both social and career-wise) that we young folk are reluctant to close off any of those possibilities. Even with popular shows like Sex and the City moralizing about the benefits of marriage, the primary audience of the show doesn't seem to be imitating it in that particular aspect.

This fact is one more reason to suggest that the conservative movement to promote the misnamed "family values" (which is largely a religious cover for homophobia and bigotry) is fundamentally out of step with the culture as a whole. In fact, it betrays the movement's fundamental hypocrisy with regard to modern culture. This is best illustrated by, on the one hand, the monotonous promotion of "marriage" as a cure for all cultural ills, and on the other hand, the denial of that very institution (and all of the benefits that go along with it) to gays. Taking a slightly broader perspective, the movement's condemnation of social liberalism (e.g., tolerance of homosexuality, etc.), contrasted with its lustful relationship with corporate liberalism, is inherently two-faced.

Is it possible for politics not to sully itself with the commingling of self-serving power and craven materialism? Apparently not.

posted February 6, 2005 08:48 PM in Political Wonk | permalink | Comments (0)

January 28, 2005

On Missile Defense (part II)

During the summer of 2001, I became extremely interested in the debate over National Missile Defense, formerly known as Star Wars, which was then being mentioned with some frequency ("non-debate" is a better description - the US media continues to believe that Americans cannot digest sophisticated arguments, and instead relies on posturing, implication and punditry in lieu of more substantive reporting). In response to this non-debate, and to my complete ignorance of the topic, I set out to understand the issue.

The extension of this entry contains the second of those articles, a discussion of the merits of the US's current attempt to create a working national (as opposed to theatre) missile defense. Note that this article was written before September 11th, 2001.

Flirting with disaster

The American plan for building a national missile defense (NMD) system to protect the American people and our allies from nuclear and biochemically equipped ballistic missiles is a frightening flirtation with disaster, and the plan will surely shift the paradigm of nuclear safety world-wide. Standing in the way of any new system is the 1972 Russian-American Anti-Ballistic Missile (ABM) Treaty [1], which has played a central role in defining the current era of nuclear arms control by cementing deterrence, the policy of mutually assured destruction, as the primary defense against nuclear and biochemical conflict.

The mainstream media has covered the Bush administration's speedy steps toward missile defense quite faithfully. However, it has completely failed to describe the government policy of recent years which set the stage for today's controversy. It has also largely ignored the nagging and important questions about what is truly at stake with the NMD plan, and whether it is the best way of achieving the goal of American security. How will having a missile defense system upset the tenuous international balance of power? Why is the Bush administration putting the NMD on a fast-track? And will the NMD truly protect the United States against the much-touted 'rogue nations'? What alternatives are available to the NMD?

The importance of history

Before we can understand the consequences of the new nuclear-arms paradigm, it is critical to understand the current one, and why nations are reluctant to leave it. See my historical perspective.

Treading on dangerous ground

In a world with multiple nuclear powers and an entrenched policy of deterrence, there are two principal dangers associated with a nation building a national missile defense, functional or not. (For this argument, we'll focus largely on the nuclear threat, still the primary weapon of mass destruction world-wide, though bio-chemical agents may replace the warhead's payload. In terms of analysis, there is only a small difference between the two in the overall impact on international stability.)

First, having a NMD reduces the danger of engaging in a small-scale nuclear conflict as perceived by the nation bearing the shield. With the ability to neutralize a small volley of warheads, such a nation would have the prerogative to involve itself in conflicts which it previously avoided due to nuclear deterrence. It also reduces the importance of international diplomacy as the primary mechanism for maintaining a balanced and stable nuclear playing field. As has been shown for automobile driving, when a driver has more safety features protecting him, he tends to take more risks: human risk-taking occurs within a well-defined 'comfort zone' [12], and installing safety measures widens the range of danger with which the driver is comfortable. Nuclear arms, in some sense, are nothing more than driving at the international level: by intentionally keeping everyone's comfort zones at a minimum, everyone drives a little more safely.

Second, with one nation possessing a national missile defense, all other nations with nuclear capability, and especially those who desire that ability, are encouraged to escalate their nuclear arms programs to surpass or overwhelm said missile shield. Technologically, it is significantly easier to make a missile harder to hit than it is to get better at hitting one (hitting an ICBM is harder than trying to shoot a bullet out of the air with another gun). As early as 1983, when SDI was announced, the American military recognized that implementing any anti-ballistic missile system would likely provoke a technological competition of counter- vs. counter-counter-measures [11], so why is it suddenly acceptable to enter into a new world-wide arms-race?

Arms-races themselves are circular in nature. Because all nations resist an un-level playing field, any new weapon which gives a single player or group of players a distinct advantage obligates others to re-level the playing field by developing counter-measures which eliminate the advantage. Nuclear arms have been the single exception to this rule of war, most significantly because of the gentlemen's agreements embodied in the ABM [1] and NPT [7] Treaties. Why? Because a nation that does not recognize the danger of nuclear conflict will not be around very long to learn from the experience. Particularly with regard to the US's arms stockpile, any nation engaging in a nuclear exchange would be completely wiped from the face of the planet.

Thus, in a world balanced by the threat of nuclear retaliation (M.A.D.), peace is maintained by appealing to the human instinct of self-preservation. This reliance worked throughout the bi-polar Cold War, but in the new multi-polar world, doubt has been raised as to whether all nuclear, and more particularly, ballistic missile-enabled states will continue to work within this uncomfortable, but life-preserving peace.

Rogue states

The Clinton administration's 1993 charter for the BMDO actually called for it to develop and acquire missile defense systems for theater and national defense. It was thought that the delicate balance of nuclear deterrence would be easily toppled by an accidental launch or by an irrational nuclear-capable 'rogue' state [13]. This uncomfortable potential for the current Mexican standoff to be broken unexpectedly has been the keystone argument in justifying renewed attempts at developing a missile defense.

Imagine, for instance, if the Afghan Taliban and/or Osama bin Laden became nuclear- and ICBM-capable (both are required to nuke US soil). The United States government has been terrified for the past decade that the threat of all-out nuclear retaliation would not prevent such extremist groups from using their new-found nuclear power.

Before accepting this conjecture, we must ask what exactly constitutes a 'rogue' state? So far, the US government has only supplied examples: North Korea, Iran and Iraq. Other lesser threats include Libya, Syria, Cuba and Sudan [14]. A cynical appraisal measures a nation's 'rogueness' by how little influence the U.S. holds in it. Said another way, rogue nations simply don't respect US dominance in the world theater. Let us simply call them 'unfriendly' states, whose national interests conflict with the United States' national interests.

In 1993, however, there were no unfriendly states capable of launching a nuke-enabled ICBM that would reach US soil, nor did any appear able to develop such technology in the near future. The CIA's 1995 National Intelligence Estimate (NIE) concluded that within a 10-15 year period, one such threat might develop [13]. Did that threat materialize? The Clinton administration was in charge for 8 years - why did it not pursue the NMD vigorously within the established timetable, and negotiate the appropriate changes to the ABM Treaty in preparation for its deployment, in anticipation of a rogue state going nuclear? Clinton, it seemed, was uncomfortable with acquiescing to the Congressional hawks who had been calling for a NMD since the 1970s. Rather, he preferred diplomacy and continued arms-reduction as an arms policy.

Enter Donald Rumsfeld

In response to the NIE report, US Congressional Republicans, convinced that the danger was much closer than 2005 - 2010, created a 1998 bi-partisan commission, chaired by former Secretary of Defense (under Gerald Ford) Donald Rumsfeld, to file a more thorough report on the danger posed to US soil by ballistic missiles. The Rumsfeld Commission's report has become the guiding document for US NMD policy, and it doesn't seem to be a coincidence that Rumsfeld has returned as Bush Jr.'s Secretary of Defense.

The report's critical finding was that an unfriendly nation could, via in-house development and, more importantly, international assistance, become nuclear- and ICBM-capable in just 5 years. Additionally, such a nation could do this largely undetected by conventional US intelligence [14]. The report did not, however, advocate a NMD as a solution, as it did not analyze any options for addressing the shortened time-table for nuclear proliferation.

Because of the Rumsfeld report and the uncertainty it indicated about the future nuclear landscape, Clinton reluctantly began to put more money behind the BMDO but refrained from making a decision about actual deployment. However, Clinton's preferred method of national security was diplomacy, not military power. In a world more economically interdependent, and more (internationally) democratic, than that of the Cold War, military power is perhaps not as reliable a strategy as it formerly was. Diplomacy, certainly, is a more reliable method of maintaining a status quo.

The Status Quo?

The NPT Treaty was intended to keep the division between nuclear and non-nuclear nations fixed. The US, Russia, China, Britain and France are the de facto nuclear powers (as in 1968 with the treaty's creation), and all but four other world nations have agreed to preserve that order. Lately, however, the Rumsfeld report claims that the NPT is failing. Israel, Pakistan and India, three of the four nations who hadn't signed the treaty by 2000, are now all nuclear-able, in part because of systems and knowledge acquired from China and Russia in violation of the NPT [14].

However, Israel, Pakistan and India are not classified as unfriendly nations. North Korea, Iraq and Iran are implicated as the principals in unbalancing the nuclear peace. What are their capabilities with, and intentions for, ballistic technology?

Iraq has apparently acquired some forbidden weapons materials, and is certainly a potential future aggressor, but for now remains a minor threat, held in check by international weapons inspections and economic sanctions. North Korea, with its Taepo Dong-2 ballistic missile, however, is capable of hitting US soil in Hawaii and Alaska. North Korea also has the greatest potential of acquiring the ability to target a larger portion of US soil in the future. You can imagine that the term 'rogue states' now primarily refers to North Korea.

But North Korea and Iran have both recently become more amenable to the West, which in turn may reduce their potential for future ballistic aggression. Despite the Rumsfeld report's worst-case scenario time-table of 5 years between a foreign country's desire for and deployment of ICBM technology, the 1998 CIA response to the Rumsfeld report, and events to date, indicate that North Korea's BM program has followed a longer schedule, by 5-10 years, than the worst-case [15].

So why the continued parade about the danger of rogue states? The Rumsfeld report's description of the danger relies heavily upon playing up the 'uncertain transitions' within Russia, China, India, Pakistan, North Korea and Iran. By its measure, that uncertainty is uncomfortable because ballistic/nuclear systems, materials and/or knowledge are less well-controlled. To put it perhaps bluntly, the West is sane and stable, but everyone else might possibly become irrational and suicidal in the future. Additionally, China and Russia are implicated as major instigators of nuclear proliferation by allowing their systems and knowledge to be purchased on the international market [14]. The politicians, mostly Republican, concluded that this situation necessitates a NMD for continued US security.

Full-speed ahead

In 2000, Clinton said he would make an executive decision regarding the deployment of an American NMD based on four criteria: the threat, the cost, the impact on U.S.-Russian nuclear arms reductions, and whether the system works [16]. Clinton decided against deployment under those criteria; however, Bush has stated (guided by Mr. Rumsfeld himself, now the Secretary of Defense again) that he plans to move ahead with NMD based merely upon the first criterion: the threat [3]. Politically, this kind of unilateral action will do nothing more than cement the growing international opinion of the US as an arrogant bully who believes that normal modes of diplomacy are subordinate to US security. Part of this disregard for the international community is linked to the Republicans' desire to retain for the US options and flexibility for an uncertain future.

Dr. Richard Garwin, a member of the Rumsfeld Commission and an advisor to the US government for over 50 years on nuclear-arms matters, notes that the current US NMD is not scaled with respect to a threat from North Korea, but rather a threat from China [17]. Why is that? In his analysis, the NMD will prompt China to more quickly modernize its nuclear weapons, deploying them on mobile, sea-based launch vehicles so as to circumvent the US NMD. This fact complicates the apparent motivation for the NMD significantly. It would seem that China is worrying the US a little more than the US would like to admit. Additionally, it hits right upon the fundamental flaw of any NMD - the arms escalation and relative ease with which a nation so intent can circumvent the shield.

The path not taken

The Rumsfeld Commission report has become the justification for deploying a NMD, breaking with the ABM Treaty and forging ahead into unknown nuclear-arms territory. Bush and the Pentagon seem confident that the military's NMD will be able to protect the nation from any unknown surprises in the future, be they from North Korea, Iraq or even China. The few hundred interceptors planned would still be a small threat to Russia's immense arsenal.

The NMD initiative however, is likely destined to fail for several reasons. French President Jacques Chirac puts it rather succinctly in an interview with the New York Times, saying: "If you look at world history, ever since men began waging war, you will see that there's a permanent race between sword and shield. The sword always wins. The more improvements that are made to the shield, the more improvements are made to the sword." [16a,b]

Garwin states that the Rumsfeld report absolutely does not call for or justify the deployment of a NMD [18]. In the Bulletin of the Atomic Scientists, he asserts that any nation capable of ICBM technology, and not just the current nuclear powers, would necessarily also have the sophistication to employ the simple counter-measures necessary to completely defeat the American NMD. Biological weapons employing 'bomblets' would be virtually impossible to destroy once the warhead separates from the ICBM carriage [19].

In place of the NMD, Garwin suggests a 'boost-phase interceptor' (BPI) which destroys a ballistic missile before the warhead (and counter-measures) can be deployed. BPIs must be stationed close to the launch site (a few hundred miles), and would be useless against missiles launched from inside Russia or China [19]. Russia has previously expressed support for a BPI program, and such a cooperation (as BPI stations on Russian soil would necessarily be jointly operated) may even improve shaky Russian-US relations. Lacking such international cooperation, BPI could still be installed on naval vessels close to potential unfriendly states.

Why is the Bush administration continuing to pursue an initiative that seems so unlikely to achieve the publicly stated goal of ensuring US security? I can't offer a clear answer on that matter, but will instead offer the hypothesis that the Republican administration fears diplomacy and would prefer the self-reliance of a military solution.

To NMD or not to NMD

At its heart, national missile defense is a fatally flawed initiative. Its capacity for defending the United States against any nuclear attack is small, considering its likely failure against simple counter-measures. Additionally, its potential for igniting a new arms race among the nuclear powers, both old and new, is frightening. The threat to US security from such an arms race seems greater than that of cooperating internationally to contain nuclear proliferation in a multi-polar world.

There still remains the potential that the US could be attacked by another nation at some point in the future. One worrying factor, and one which should be addressed immediately and diplomatically, is Russia and China's role in nuclear proliferation. The Rumsfeld Commission's worst-case estimate of 5 years between a country's wanting and getting working ICBM technology is a piece of knowledge worth chewing on, but it's not a sufficient justification for hastily deploying an expensive, easily circumvented and politically dangerous NMD.

A vigorous program to reduce the world's nuclear (world-wide, roughly 30,000 war-heads) and biochemical arsenals would do much to allay the US desire for ballistic protection. Also, by cooperatively working (with Russia, Britain, France and China) to develop and operate BPI programs, the US could breathe new life into the Non-Proliferation Treaty and the status quo it represents. Diplomacy, not the military, would seem to be the only national missile defense that has no counter-measure and that technological or political developments will not render obsolete.

© July 2001, Aaron Clauset


[1] Anti-Ballistic Missile Treaty (1972)

[3] "Dropping the Bomb", Newsweek © 2001

[7] Non-Proliferation Treaty (1968)

[11] Carnegie Non-Proliferation Project

[12] Driver 'Comfort Zones'

[13] The End of the Star Wars Era (DoD News Briefing, 1993)

[14] Rumsfeld Commission report (1998)

[15] Robert Walpole (CIA) on North Korea's Taepo Dong missile

[16a] New York Times with French President Jacques Chirac (1999)

[16b] John Isaacs on Missile Defense (2000)

[17] Interview with Dr. Richard Garwin (2000)

[18] Op-Ed by Dr. Richard Garwin, member of the Rumsfeld Commission (1998)

[19] Dr. Richard Garwin, "The Wrong Plan"

[b] Possible Soviet Responses to the US Strategic Defense Initiative (declassified)

[c] National Missile Defense

posted January 28, 2005 02:56 PM in Political Wonk | permalink | Comments (0)

January 19, 2005

On Missile Defense (part I)

During the summer of 2001, I became extremely interested in the debate over National Missile Defense, formerly known as Star Wars, which was then being mentioned with some frequency ("non-debate" is a better description - the US media continues to believe that Americans cannot digest sophisticated arguments, and instead relies on posturing, implication and punditry in lieu of more substantive reporting). In response to this non-debate, and to my complete ignorance of the topic, I set out to understand the issue.

This entry contains the first of those articles, a historical perspective on the US's interest in national (as opposed to theatre) missile defense. Note that this article was written before September 11th, 2001. Given the Bush administration's recent backing-off of its promise to have a fully functional NMD in place by the beginning of 2005, and the military's continued love affair with the delusion of safety from ICBMs and its insistence on Bush-style insanity, this article remains both relevant and important today.

Holding a loaded gun : Mutually Assured Destruction (M.A.D.)

With the development of the nuclear bomb during WWII, the international stage took its first step into a new paradigm of unparalleled potential for destruction. Douglas Hofstadter, writing in 1985 [2], provides an illustrative example by Jim Geier and Sharyl Green, comparing the total destructive force of WWII with the force in 1981:

In an 11 x 11 grid of squares, each square, save for the center one, has 50 dots in it; the center square has one dot. That center square represents the entire destructive power of World War II: 3 megatons (translated into nuclear-warhead terms). The rest of the grid (120 squares, each with 50 dots) represents the destructive power of the world's nuclear arsenal in 1981: 18,000 megatons.

In Hofstadter's example, we note that in 1981 the world's destructive power was 6,000 times (almost 4 orders of magnitude) that of WWII. The arsenal today is even larger. A recent Newsweek article [3] reports that the US alone has more than 18,820 nuclear warheads (each capped with perhaps tens of megatons), while the rest of the world clocks in at perhaps half of that, including Russia's deteriorating complement. All in all, today's total destructive power, combined with the M.A.D. strategy [4], means that were nuclear war to happen, humans have the capability to destroy the world's population an estimated 94 times over.
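
The grid arithmetic is easy to check directly (a quick sketch, using only the figures quoted above):

```python
# Check the arithmetic in Hofstadter's grid example, using the figures above.
squares = 11 * 11                   # 121 squares in the grid
center_dots = 1                     # the lone center dot: all of WWII
other_dots = (squares - 1) * 50     # 120 squares with 50 dots each

wwii_megatons = 3                   # WWII's destructive power, in megatons
world_1981_megatons = 18_000        # world nuclear arsenal, 1981

print(other_dots)                            # 6000 dots
print(world_1981_megatons // wwii_megatons)  # 6000: each dot is worth one WWII's 3 Mt
```

So each of the 6,000 dots carries the destructive power of all of World War II, which is exactly why the image is so arresting.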

Putting the US destructive capability into perspective: a single American Ohio-class submarine [5], with its complement of 192 warheads, could destroy most of Russia's population centers, and the US currently has eighteen of them in service.

Building a bigger gun

As is universally known, much of the world's nuclear armament was developed during the Cold War, when the Soviet and American militaries each perceived that an opponent with a greater arsenal would be a threat to its national security. Thus, the historic arms race.

Yet even while the tit-for-tat arms race was continuing, the world's powers recognized the destructive potential being developed. Beginning in 1963, with the Limited Test Ban Treaty [6], 108 (about half) of the world's nations began negotiating several key treaties to limit the proliferation of nuclear arms, and established the M.A.D. paradigm as the chief deterrent for use of nuclear weapons. Although the treaties were essentially an honor code agreement that nations would not pursue the development of nuclear arsenals (if they did not already have them, which limited the nuclear powers to mostly western nations), the code has become the accepted paradigm for balancing nuclear power. In recent years, that code has been hedged (most notably by Pakistan and India), primarily by eastern nations wanting the status and respect accorded a nuclear power.

The nuclear arms situation in the Cold War can be thought of as a Mexican Standoff, in which two men, both armed with loaded guns or un-pinned hand-grenades, are forced to share a confined space (Earth). Regardless of niceties, spats or any desire to break out the can of whoop-ass, both recognize their own mortality, and thus they do nothing with their lethal weapons. Self-preservation rules this day.

Other people occupy the room as well (the rest of the world), and most of them tend to divide themselves into two groups: the East-side, and the West-side gangs. Conflict naturally arises between the two gangs, but self-preservation still keeps the lethal weapons out of play. Much posturing and many displays of strength, but generally, no one gets terribly hurt.

The gentleman's agreement

The 1972 ABM Treaty [1], an agreement between the USA and the USSR, restricts both nations' development of defenses against nuclear (ballistic) missiles. The treaty ensured that M.A.D. would continue to deter the use of nuclear weapons. Specifically, the USA and the USSR agreed that neither would build a missile defense system that would protect its entire nation against the other's attacks. The treaty did allow each nation to construct a small-scale, local missile defense in two agreed-upon locations (restricted to one in 1976).

In our model of the nuclear balance, the ABM Treaty is an agreement between our two lethally-armed men to not develop Kevlar, bullet-proof vests or whatever which would fully protect against the other's weapons. The minor allowances for 'partial' defense amounted to allowing each man to wear a bullet-proof helmet; this maintains the standoff, while affording a small degree of protection for a vital organ.

While the Soviets elected to build a missile defense system around Moscow, the Americans deactivated their system at Grand Forks, North Dakota in 1976 [1].

A second gentleman's agreement, in the form of the 1968 Non-Proliferation Treaty (NPT), divided the world into nuclear and non-nuclear countries, and required countries in different categories to not share nuclear arms-related information or resources with each other; thus, the NPT is the honor code that maintains the steady-state of the nuclear landscape. As of early 2000, it had been signed by all but four of the world's nations (Cuba, Israel, India and Pakistan). Additionally, the Middle East was agreed to be a nuclear-free zone under a recent Treaty resolution [7].

The 'missile defense' trump

The balance of nuclear power in the world today was almost entirely determined by the development of nuclear power in the United States and the Soviet Union, which together hold perhaps 90% of the world's nuclear power, with the remaining 10% largely held by Europe. The Cold War, with its West (Us) vs. East (Them) mentality, is responsible for the current distribution of nuclear power. Both the US and the USSR nuclear strategies were designed with this simple bi-polar nuclear threat in mind.

As evidenced by national security documents in the Eisenhower administration, the United States has always been uneasy with the nuclear standoff [8]. There began the discussion about a national missile defense, although the primary goal was still to amass the larger arsenal. Still, there appeared never to be a commitment to the ABM Treaty for an indefinite period of time - rather, both the US and the USSR felt it a necessity of diplomacy until a time came when a full missile defense system could be put in place. That time never came for the Soviet Union as its economic and political turmoil was, in the end, the greater enemy.

In 1983, Reagan, still deep within the Cold War mentality, announced the Strategic Defense Initiative (SDI), which included the much-lambasted Star Wars space-based interceptors [9]. This initiative was partially in response to the Soviets' cutting corners with the ABM Treaty by deploying a forbidden radar system. Reagan wanted to make nuclear missiles "obsolete" by creating a global missile defense system and breaking with the ABM Treaty.

The end of the old paradigm

SDI never came to fruition, and in 1993 the program was abandoned as unnecessary following the demise of the Soviet Union and the apparent end of the bi-polar paradigm. However, the Clinton administration continued funding anti-ballistic missile research, transmogrifying SDI into a less research-oriented program called the Ballistic Missile Defense Organization (BMDO) [10].

There are three types of missile defense: global missile defense (Star Wars/SDI), theater missile defense (Patriot missiles) and national missile defense (protecting a nation's own people alone). The latter two were the primary goals of the Clinton administration's BMDO. SDI was designed to resist a full surprise attack from the Soviet Union, now a quite unlikely event. Theater and national missile defense are envisioned to protect against smaller nuclear/missile forces, such as those of nations unfriendly to the West (Pakistan, India, Iraq, etc.). Most notably, the Patriot missile defense system was used to protect coalition troops during the Gulf War of 1991. The Bush administration has not raised the NMD program from the corpse of SDI, but has actually just 'fast-tracked' a Clinton-era program.

With the deterioration of the Soviet Union as a world super-power, the United States (along with its allies) has been left unopposed at the top of the heap. It's scary at the top, too - everyone else is probably not being entirely honest with you, and possibly plotting to take your place. One might develop a healthy sense of paranoia as a result. On the world stage, this is perhaps even more true, as international politics can be even messier than the mud-fights in American politics.

The new paradigm

In the Newsweek article, the Bush Administration claimed that "Missile defense is intended not to de-fang Russia but to deter rogue states (Ed. North Korea, Iran, Iraq, etc.) from trying to blackmail the United States with a nuclear-tipped rocket or two." But how much of a threat are such rogue states? And is upsetting the current balance of nuclear power an acceptable consequence for the development of such a system, trashing the ABM Treaty in the process?

The Carnegie Non-Proliferation Project [11] estimates that 34 countries either have or are developing ballistic missile capability. While not all of them are nuclear-capable, they still complicate the Mexican stand-off situation.

In our simple model, instead of just a pair of men, we have close to three dozen, each carrying a weapon (gun, grenade, french-tickler). We still have our West-side and East-side gangs as well, although they're slightly less well-defined now. For the West-side gang leader (that would be the US), the possibility of one of the other people in the room breaking the nuclear honor code and launching a sneak attack is maddening. The comfort of the bi-polar, black-and-white world of the Cold War has been replaced with the complicated reality of a more democratic world stage. Additionally, with the East-side gang dissolving, it is possible that a single agent in the room might decide to be stupid, procure or develop a lethal weapon and attempt to use it.

How much threat?

Perhaps a NMD is warranted, if only to soothe the paranoia of the United States and to protect it in the uncertain post-Cold War nuclear landscape. However, before accepting the argument that rogue states warrant a NMD, some significant questions must be answered:

1. How will a US NMD change the international nuclear playing-field?

2. What constitutes a rogue state?

3. Do any states of concern currently have both nuclear warheads and intercontinental ballistic missile (ICBM) technology (both required to nuke the US)?

4. Do any states of concern have the potential of developing/acquiring both technologies?

5. How do current NMD technologies perform against ICBMs with standard counter-measures?

6. Would deploying a NMD deter rogue states from nuking the US?

7. What alternatives to a NMD are there for preventing a rogue nation from nuking the US?

© July 2001, Aaron Clauset


[1] Anti-Ballistic Missile Treaty (1972)

[2] Metamagical Themas, by Douglas Hofstadter. ©1981

[3] "Dropping the Bomb", Newsweek © 2001

[4] Mutually Assured Destruction

The 1983 movie "War Games" [a] illustrated the danger of the M.A.D. strategy by showing that even an artificial intelligence could understand the futility of nuclear war. In a zero-sum game, for me to win, you must lose. With nuclear war, however, the sum becomes negative as both sides actually lose, hence the mutual assurance of destruction.
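The zero-sum versus negative-sum distinction can be made concrete with a toy payoff matrix. This is only a sketch: the payoff numbers below are invented for illustration and are not drawn from any actual strategic analysis.

```python
# Toy payoff matrix for a two-player nuclear standoff.
# Payoffs are (row player, column player); values are hypothetical.
# Zero-sum cells sum to 0 (my win is your loss); the "both launch"
# cell sums to a large negative number: mutual destruction.

payoffs = {
    ("hold", "hold"):     (0, 0),      # uneasy peace: nobody gains
    ("launch", "hold"):   (1, -1),     # zero-sum: one side's gain offsets the other's loss
    ("hold", "launch"):   (-1, 1),
    ("launch", "launch"): (-10, -10),  # negative-sum: both sides lose
}

for (a, b), (pa, pb) in payoffs.items():
    kind = "zero-sum" if pa + pb == 0 else "negative-sum"
    print(f"{a:6s} vs {b:6s}: payoffs {pa:+d}/{pb:+d} ({kind})")
```

The point of the footnote falls out of the last row: once both sides launch, the game is no longer zero-sum, so "winning" is no longer possible for either player.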

[5] Ohio class ballistic submarine, United States Navy

[6] Limited Test Ban Treaty (1963)

[7] Non-Proliferation Treaty (1968)

[8] National Security Policy: Eisenhower Administration

[9] Strategic Defense Initiative (SDI), early version

[10] The Rise and Fall of SDI, by Alex Tonello (1997)

[11] Carnegie Non-Proliferation Project

[a] "War Games", directed by John Badham (1983)

posted January 19, 2005 09:00 AM in Political Wonk | permalink | Comments (0)

January 13, 2005

Terrorism: looking forward, looking back

This month's edition of The Atlantic has a pair of excellent articles which focus on terrorism and recent US policy about it. The first article, by former anti-terrorism chief Richard Clarke, is an imagined retrospective from the year 2011 on the decade that followed the declaration of the (ill-named) War on Terrorism. In it, he describes a nation which is only capable of reacting (poorly) to previously identified but largely ignored dangers of international terrorist strikes on US soil. Erring on the side of the doom-sayer, Clarke paints a sobering yet compelling picture of how US domestic policy will slowly but surely reduce civil liberties and economic viability in favor of fortress-style security.

The second, by long-time Atlantic correspondent James Fallows, describes in honest and uncomfortable detail the current head-in-the-sand security strategies being pursued by those in power. Drawing a strong analogy to the cautious and even-handed approach that Truman, Kennan and Marshall took toward preparing the nation for the long struggle with communism, Fallows points out that current policy is short-sighted and lopsided toward showy "feel good" measures that likely make civilians less secure rather than more. He closes with a discussion of the problem of "loose nukes" (primarily from Russia's poorly guarded and decaying stockpile, but also potentially from countries like Pakistan that have not signed the Nuclear Non-Proliferation Treaty), and the lack of seriousness coming from Washington with regard to addressing this eminently approachable goal. Indeed, in 2002 bin Laden issued a fatwa authorizing the killing of four million Americans, half of them children, in retribution for US Middle East policy - achieving this number can only be done with something like a nuclear bomb.

My reaction to these thoughtful and well-measured articles is that they basically nail the problem with US policy on terrorism exactly. The US has not been serious about facing the changes that need to be made (the "Department of Homeland Security" is basically a misnomer), and anti-terrorism funding has become a massive source of pork for congressmen. Combined with the hypocritical rhetoric of the government, and the continued US insistence on an oil economy, we're significantly worse off now than we were pre-September 11th. Stealing from the popular college student adage, our current domestic security policy is like masturbation: it feels good right now, but ultimately, we're only fucking ourselves.

Ten Years Later, by Richard Clarke

Victory Without Success, by James Fallows

If these weren't scathing enough, the award-winning William Langewiesche writes a Letter from Baghdad concerning the depth and pervasiveness of the insurgency there. Langewiesche describes the deteriorating (that word doesn't do his account justice - "anarchic" is more apt) security situation as having reached the point that a continued US presence will indeed only intensify the now-endemic guerrilla warfare. An interesting contrast is that between the Iraqi resistance and, for instance, the French Resistance of World War II. I strongly suspect that, ultimately, the US does not have the stomach to truly break the resistance, as that would essentially require using the same draconian measures that Saddam used to install the Baathist regime. Depressing, indeed.

posted January 13, 2005 03:24 PM in Political Wonk | permalink | Comments (0)