January 31, 2010
Why I think the iPad is good
Since Apple announced it, I've found myself in the strange position of defending the iPad to my friends, who uniformly think it sucks; not a single one of them agrees with me that it's good. Most of them cite things like the name, the lack of multi-tasking, and the lack of deep customizability (i.e., programming) as reasons. (If you want a full list of these kinds of reasons, the Huffington Post gives nine of them.) I don't disagree with these complaints at all, although I'm pretty sure that some of them, like the multi-tasking, will be fixed later on.
Some complaints are more thoughtful: that the iPad is a closed device (like the iPod touch / iPhone), and that you can only do the things on it that Apple allows you to, things basically focused on consuming media (and spending money at Apple's online stores). (These points are made well by io9's review of the iPad.) I don't disagree that this is a problem with Apple's business strategy for the iPad, and that it will limit its appeal among more serious computer users.
But, I think all of these complaints miss the point of what is good about the iPad.
The iPad is good because it will push the common experience of computing toward how we interact with every other device / object in the world, i.e., pushing, pulling, prodding and poking them, and this is the future. (Imagine programming a computer using a visual programming language, rather than the arcane character-based syntaxes we use now; I don't know if it would be genuinely better, but I'd sure like to find out.) One thing that sucks about how current computers are designed is how baroque their interfaces are. Getting them to do even simple things requires learning complex sequences of actions using complicated, indirect interfaces. By making the mode of interaction more direct, devices like the iPad (even with all of its flaws) will make many kinds of simple interactions with computing devices easier, and that's a good thing.
I think the iPad is disappointing to many techy people because they wanted it to completely replace their current laptop. They wanted a device that would do everything they can do now, but using a cool multi-touch interface. (To be honest, it's not even clear that this is possible.) But I think Apple knows that these people are not the target audience for the iPad. The people who will buy and love the iPad are your parents and your children. These are people who primarily want a casual computing device (for things like online shopping, reading the news and gossip sites, listening to music, watching tv/movies, reading email, etc.), who don't care too much about hacking their computers, and who don't mind playing inside Apple's closed world (which more and more of us do anyway; think iTunes).
If things go the way I think they will, in 20 years, the kids I'll be teaching at CU Boulder will have had their first experience with computers on something like an iPad, and they're going to expect all "real" computers to be as physically intuitive as it is. They're going to hate keyboards and mice (which will go the way of standard transmissions in cars), and they're going to think current laptops are "clunky". They'll also know that serious computing activities require a serious computer (something more customizable and programmable than an iPad). But most people don't do or care about serious computing activities, and I think Apple knows this.
So, I think most of the criticism of the iPad is sour grapes (by techy people who misunderstand whom Apple is targeting with the iPad and, more fundamentally, what Apple has done to the future of human-computer interaction, which is going to be dominated by multi-touch interfaces like the iPad's). I hope the iPad is successful because I want interacting with computers to suck less. Of course, I also want it to run multiple apps, have a camera for video-conferencing, use open standards and file formats, do handwriting recognition, and generally replace my laptop. These things will come, I think, but to become real, they need a device like the iPad to call home.
January 25, 2010
On the frequency of severe terrorist attacks; redux
Sticking with the theme of terrorism, in the new issue of the Journal of Conflict Resolution is an article by Frits Wiegel and me, in which we analyze, generalize, and discuss the Johnson et al. model of the internal dynamics of terrorist / insurgent groups.
The goal of the paper was to (i) relax one of the mathematical assumptions Johnson et al. initially made when they introduced the model back in 2005, (ii) clearly articulate its assumptions and discuss the evidence for and against them, (iii) discuss the relevance of the model for counter-terrorism / counter-insurgency policies, and (iv) identify ways the model's assumptions and predictions could be tested using empirical data.
Here's the abstract:
We present and analyze a model of the frequency of severe terrorist attacks, which generalizes the recently proposed model of Johnson et al. This model, which is based on the notion of self-organized criticality and which describes how terrorist cells might aggregate and disintegrate over time, predicts that the distribution of attack severities should follow a power-law form with an exponent of alpha = 5/2. This prediction is in good agreement with current empirical estimates for terrorist attacks worldwide, which give alpha = 2.4 ± 0.2, and which we show is independent of certain details of the model. We close by discussing the utility of this model for understanding terrorism and the behavior of terrorist organizations, and mention several productive ways it could be extended mathematically or tested empirically.
Looking forward, this paper is really just a teaser. There's still a tremendous amount of work left to do both in terms of identifying other robust patterns in global terrorism and in terms of explaining where those patterns come from. The hardest part of this line of research promises to be reconciling traditional economics-style assumptions in conflict research (that terrorists are perfectly rational actors: their actions are highly strategic, are best explained using game theory, and are otherwise contingent on the particular local history and politics) with these newer physics-style assumptions (that terrorists are highly irrational "dumb" actors: their actions blindly follow fundamental "laws", are best explained using simple mechanical models, and are otherwise random). The truth is almost surely a compromise between these two extremes, one that includes both local strategic flexibility and contingency, along with fundamental constraints created by the "physics" of planning and carrying out terrorist attacks. Developing a theory that captures the right amount of both approaches seems hard, but exciting.
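For intuition, the mechanism behind that alpha = 5/2 prediction can be sketched in a few lines of code. To be clear, this is my own toy version of size-biased coalescence-fragmentation dynamics, not the generalized model from the paper, and the parameter values (1,000 agents, fragmentation probability 0.05, 20,000 steps) are arbitrary choices for illustration:

```python
import math
import random

def simulate_cells(n_agents=1000, steps=20000, nu=0.05, seed=1):
    """Toy size-biased coalescence-fragmentation dynamics.

    `cells` holds the size of each cell. Each step, a cell is picked
    with probability proportional to its size; with probability nu it
    disintegrates completely into singletons, and otherwise it merges
    with a second, independently size-biased cell.
    """
    rng = random.Random(seed)
    cells = [1] * n_agents          # every agent starts alone
    for _ in range(steps):
        i = rng.choices(range(len(cells)), weights=cells)[0]
        if rng.random() < nu:
            cells.extend([1] * cells.pop(i))   # total disintegration
        else:
            j = rng.choices(range(len(cells)), weights=cells)[0]
            if i != j:
                a, b = max(i, j), min(i, j)
                cells[b] += cells.pop(a)       # coalescence
    return cells

sizes = simulate_cells()
assert sum(sizes) == 1000          # total mass is conserved

# Crude continuous-MLE estimate of the tail exponent; at this toy
# scale it is quite noisy, but in the full model the cell-size
# distribution's exponent converges to 5/2.
tail = [s for s in sizes if s >= 2]
if tail:
    alpha_hat = 1 + len(tail) / sum(math.log(s / 1.5) for s in tail)
```

With many more agents and steps (and the proper discrete estimator), the fitted exponent should settle near the predicted 5/2; the point of the sketch is just that very simple group dynamics can generate a heavy-tailed severity distribution.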
 A. Clauset and F. W. Wiegel. "A generalized aggregation-disintegration model for the frequency of severe terrorist attacks." Journal of Conflict Resolution 54(1): 179-197 (2010). (arxiv version)
 This model should now perhaps be called the Bohorquez et al. model, since that's the author order for their published version, which appeared last month in Nature. See also the accompanying commentary in Nature, in which I'm quoted.
January 12, 2010
The future of terrorism
Here's one more thing. SFI invited me to give a public lecture as part of their 2010 lecture series. These talks are open to, and intended for, the public; they're held once a month in Santa Fe, NM, over most of the year. This year's schedule is pretty impressive. For instance, on March 16, Daniel Dennett will be giving a talk about the evolution of religion.
My own lecture, which I hope will be good, will be on June 16th:
One hundred sixty-eight people died in the Oklahoma City bombing of 1995, 202 people died in the 2002 nightclub fire in Bali, and at least 2749 people died in the 9/11 attacks on the World Trade Center Towers. Such devastating events captivate and terrify us mainly because they seem random and senseless. This kind of unfocused fear is precisely terrorism's purpose. But, like natural disasters, terrorism is not inexplicable: it follows patterns, it can be understood, and in some ways it can be forecasted. Clauset explores what a scientific approach can teach us about the future of modern terrorism by studying its patterns and trends over the past 50 years. He reveals surprising regularities that can help us understand the likelihood of future attacks, the differences between secular and religious terrorism, how terrorist groups live and die, and whether terrorism overall is getting worse.
Also, if you're interested in my work on terrorism, there's now a video online of a talk I gave on their group dynamics last summer in Zurich.
Workshop: Nonlinear Dynamics of Networks
And finally, here's one more upcoming networks workshop. This one, I expect, will be really good, in spite of the fact that I'll be presenting.
Date & Location: 5-9 April, 2010, in College Park, MD
Organizers: Michelle Girvan (UMD), Ed Ott (UMD), Raj Roy (UMD) and Eitan Tadmor (UMD)
Description: The interconnection of many dynamical units to form a complex system can lead to unexpected collective behavior. This dynamics depends upon both the individual characteristics of the participating units, as well as the topological character and properties of the network of their connections. This workshop will focus on gaining understanding of general principles and techniques of analysis that will be of broad use in the many applications where networked system dynamics is a significant issue. Another aim of the workshop will be to highlight particularly important examples of applications where the issue of network dynamics arises.
Understanding the dynamics of networked systems is becoming an increasingly important and essential component in many areas of science and technology. Examples include social networks, communication and computer networks, gene networks, networks of neurons, etc. Dynamics on such networks include such problems as synchronization of temporal behavior of units composing a network, robustness of function to network damage (either intended or unintended), etc. The dynamics of networks themselves (i.e., change of network topological structure with time) is also an essential issue in many cases. Examples of issues in this area include adaptive evolution of network topology, formation and growth of networks, etc.
It is intended that all of the above, as well as related issues, will be open for discussion at this workshop. The two overarching goals of the workshop will be
• To contribute to the understanding of common, basic principles of network dynamics, and
• To uncover useful general analysis techniques for the study of these systems
Conference: NetSci 2010
While I'm at it, it looks like NetSci will happen again this year, and this time it returns to the US.
Date & Location: May 12-14, 2010, in Boston, USA
Organizers: Marta C. González (MIT), César A. Hidalgo (Harvard), Ginestra Bianconi (Northeastern) and Albert-László Barabási (Northeastern)
Submission Deadline: Feb 26, 2010
Description: Bringing together leading researchers, practitioners, and teachers in network science (including analysts, modelling experts, visualisation specialists, and others), NetSci fosters interdisciplinary communication and collaboration. The conference focuses on novel directions in networks research within the biological and environmental sciences, computer and information sciences, social sciences, finance and business.
The School part of the event (10-11 May) offers a series of tutorials and lectures, introducing tools and basic results from a variety of research areas of major interest for the study of complex networks. The School presents basic experimental and theoretical developments, and also educates the research community on standard network databases, tools, and computational resources.
The Conference part of the event (12-14 May) is dedicated to talks presenting the latest results in the field and their applications in various disciplines.
Workshop: CompleNet 2010
I'm also on the Program Committee for CompleNet 2010, a workshop / conference on complex networks. It was in Europe last year, although I didn't actually attend. If the composition of the PC is any basis for judging, it's a very internationally oriented workshop.
Date & Location: October 13-15, 2010, in Rio de Janeiro, Brazil
Organizers: Giuseppe Mangioni (University of Catania), Ronaldo Menezes (Florida Tech.), Vincenzo Nicosia (University of Catania)
Submission Deadline: May 31, 2010
Description: This international workshop on complex networks (CompleNET 2010) aims at bringing together researchers and practitioners working on areas related to complex networks. In the past two decades we have been witnessing an exponential increase in the number of publications in this field. From biological systems to computer science, from economic to social systems, complex networks are becoming pervasive in many fields of science. It is this interdisciplinary nature of complex networks that this workshop aims at addressing.
Authors are encouraged to submit previously unpublished papers on their research in complex networks. Both theoretical and applied papers are of interest. Specific topics of interest include (but are not limited to):
* Models of Complex Networks
* Structural Network Properties and Analysis
* Complex Networks in Technology
* Complex Networks in Biological Systems
* Social Networks
* Search in Complex Networks
* Emergence in Complex Networks
* Complex Networks and Computer Epidemics
* Rumor Spreading
* Community Structure in Networks
* Link Analysis and Ranking
* Geometry in Complex Networks
January 08, 2010
Facebook Fellowships 2010
These sound like a great opportunity for folks doing doctoral work on complex networks and related topics. Facebook says they're only for this school year, but I suspect that if they get some good applications, and if the people who get the fellowships do good work, they'll do this again next year.
Every day Facebook confronts some of the most complex technical problems anywhere, and we believe that close relationships with the academy will enable us to address many of these problems at a fundamental level and solve them. As part of our ongoing commitment to academic relations, we are pleased to announce the creation of a Facebook Fellowship program to support graduate students in the 2010-2011 school year.
We are interested in a wide range of academic topics, including the following topical areas:
•Internet Economics: auction theory and algorithmic game theory relevant to online advertising auctions.
•Cloud Computing: storage, databases, and optimization for computing in a massively distributed environment.
•Social Computing: models, algorithms and systems around social networks, social media, social search and collaborative environments.
•Data Mining and Machine Learning: learning algorithms, feature generation, and evaluation methods to produce effective online and offline models of behavioral signals.
•Systems: Hardware, operating system, runtime, and language support for fast, scalable, efficient data centers.
•Information Retrieval: search algorithms, information extraction, question answering, cross-lingual retrieval and multimedia retrieval.
Eligibility:
•Applicants must be full-time Ph.D. students in the topical areas represented by these fellowships who are currently involved in on-going research.
•Students must be in Computer Science, Computer Engineering, Electrical Engineering, System Architecture, or a related area.
•Students must be enrolled during the academic year that the Fellowship is awarded.
•Students must be nominated by a faculty member.
For more information (including the details of what's required to apply and how much cash it's worth), check out Facebook's Fellowship page. The application deadline is February 15th.
Tip to Barbara Kimbell.
January 07, 2010
This household item is, on average, 2.7 years old.
Yesterday on one of the morning radio shows that Lisa and I listen to on the way to work, this puzzle was posed: What household item is, on average, 2.7 years old? 
I thought it might be some kind of semi-durable good, like a TV. But, I was wrong. The answer was salad dressing. Yuck! 
Oh, and happy new year! Since my first entry of 2010 is about nutrition (kind of), let me end on a positive note by recommending Michael Pollan's new bite-sized book "Food Rules: An Eater's Manual", which he recently promoted on The Daily Show.
 It's not clear to me that this isn't some kind of urban legend (or even just a made-up fact), but it does seem superficially plausible... If we take it at face value, a corollary would be that many people have 5-year-old salad dressing in their fridge, and some salad dressing is even older (which would be necessary to balance the probably more numerous bottles that are less than 2.7 years old; some people, after all, actually use salad dressing regularly). An alternative explanation, though, would be that the salad dressing you buy at the grocery store is already pretty old when you buy it.
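To make that balancing argument concrete, here's a back-of-the-envelope version (the 80% share and half-year average age for the "recent" bottles are made-up numbers, purely for illustration):

```python
# Overall mean age is 2.7 years. Suppose (hypothetically) 80% of
# bottles are recent purchases averaging 0.5 years old; the remaining
# 20% must then be old enough to pull the overall mean up to 2.7.
p_recent, age_recent, mean_age = 0.80, 0.5, 2.7
age_old = (mean_age - p_recent * age_recent) / (1 - p_recent)
print(round(age_old, 1))   # → 11.5 (years)
```

In other words, if most bottles really are fresh, the rest have to be ancient, which is exactly the corollary above.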