David H. Ackley

I am an associate professor of Computer Science at the University of New Mexico, with degrees from Tufts and Carnegie Mellon. Over twenty-five years my work has involved neural networks and machine learning, evolutionary algorithms and artificial life, and biological approaches to security, architecture, and models of computation.
I have always worked with smart, creative people. I am drawn toward robust computational architectures, distributed and bottom-up control, adaptation and evolution, and randomness. Among my project contributions, the highlights below are more recent, higher impact, or just personal favorites.
Robust-first computing
Next to the wary toughness of living creatures, modern mass market computers are shamefully brittle and frighteningly insecure. We blame clueless users and lazy programmers. We blame tech companies, terrorist countries, and computer criminals. We must also blame the decades we in computer science have spent optimizing efficiency at all costs.
For the exploding population of computers in the wild—in our cars, smartphones, medical equipment—we need to elevate robustness to be their primary design criterion. This impacts the entire computational stack.
Because efficiency costs robustness.
These ideas are explored in a short, easy-to-read paper, "Beyond Efficiency", appearing as a Viewpoint in the October 2013 issue (subscription required) of Communications of the ACM. Sorting is used to exemplify the tradeoffs between efficiency and robustness, and a much-maligned algorithm finds a niche.
Beyond Efficiency, BibTex, author's preprint: PDF.
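For a feel of the tradeoff the paper examines, here is a toy experiment of my own devising—not the paper's code, with every parameter invented for illustration. When comparisons occasionally lie, a quicksort that trusts each comparison exactly once can strand items far from home, while bubble sort's many redundant local comparisons tend to repair the damage.

```python
import random

def faulty_less(a, b, p_err=0.1):
    """Comparison that lies with probability p_err, modeling a faulty component."""
    truth = a < b
    return (not truth) if random.random() < p_err else truth

def quicksort(xs, less):
    """Straightforward quicksort driven by a (possibly faulty) comparator."""
    if len(xs) <= 1:
        return xs
    pivot, lo, hi = xs[0], [], []
    for x in xs[1:]:
        (lo if less(x, pivot) else hi).append(x)   # each comparison trusted exactly once
    return quicksort(lo, less) + [pivot] + quicksort(hi, less)

def bubblesort(xs, less):
    """Bubble sort: many redundant local comparisons, so single errors get revisited."""
    xs = list(xs)
    for _ in range(len(xs)):
        for i in range(len(xs) - 1):
            if less(xs[i + 1], xs[i]):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

def max_displacement(xs):
    """How far the worst-placed element ends up from its correct position."""
    return max(abs(i - x) for i, x in enumerate(xs))

if __name__ == "__main__":
    data = list(range(100))
    random.shuffle(data)
    print("quicksort :", max_displacement(quicksort(data, faulty_less)))
    print("bubblesort:", max_displacement(bubblesort(data, faulty_less)))
```

In typical runs of this sketch the faulty quicksort leaves some element many positions out of place, while bubble sort's worst error stays small: the efficient algorithm pays for its speed with fragility.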
A 15-minute video introduction to robust-first computing:
See also An infinite computer
Peer-to-peer social networking, 1989–2002
Facebook connects people to each other, but only indirectly. All your information flows first into their central servers, raising issues of data ownership and resale, aggregation and privacy, retention and control. Most social networks today work the same way, though there are ever more exceptions.
The ccr project, started at Bellcore in 1989, and continued at UNM starting in 1994, explored peer-to-peer social networking. Each ccr user—each peer—is the omnipotent 'god' of their own world, but a mere 'mortal' when visiting the worlds of others. The power dynamic is deeply symmetric. Data centralization is minimized.
Among the mechanisms developed and deployed in ccr: cryptography and public keys aided communications privacy and peer authentication; adaptive resource management mitigated channel and processor flooding attacks; a subject-oriented type system allowed remote code execution with credible safety; and an end-user programming language allowed the development of custom 'laws of physics'.
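ccr's actual resource-management code has not been published, so purely as a generic illustration of the flooding-mitigation idea, here is a toy per-peer throttle of my own: a token bucket whose refill rate adapts downward when a peer keeps flooding and creeps back up when it behaves. All names and numbers are invented.

```python
import time

class PeerThrottle:
    """Toy per-peer message throttle (illustrative only, not ccr's mechanism):
    a token bucket whose refill rate shrinks for peers that keep flooding."""

    def __init__(self, rate=10.0, burst=20.0):
        self.base_rate = rate          # tokens/second for a well-behaved peer
        self.rate = rate               # current (adaptive) refill rate
        self.burst = burst             # maximum stored tokens
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, cost=1.0):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            self.rate = min(self.base_rate, self.rate * 1.01)   # slow forgiveness
            return True
        self.rate = max(self.base_rate / 64, self.rate / 2)     # punish flooding
        return False

if __name__ == "__main__":
    gate = PeerThrottle(rate=5.0, burst=5.0)
    accepted = sum(gate.allow() for _ in range(1000))   # one peer blasts 1000 messages
    print(f"accepted {accepted} of 1000 flooded messages")
```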
A 1997 paper offers an overview of ccr: PDF, BibTex.
A 2000 paper (later cited by The Economist) uses ccr in examples: HTML, PDF, BibTex.
Regrettably, many lessons learned from ccr have yet to be presented in the literature, but they were pivotal in shaping my subsequent work in computer security (see highlight) and architecture (see highlight).
Many people participated in ccr over the years, as developers and interface designers, and as gods of and mortals in the distributed ccr universe.
Excerpts from the ccr object hierarchy
The ccr key server live status page
Ackley functions, 1985–1987
The 'Ackley function' (image source: Amazon)
Summary yet to be written
My 1987 Ph.D. dissertation, published as A connectionist machine for genetic hillclimbing: Amazon, CMU thesis archive, BibTex, Cited by 702
As a test problem for optimization, the original 'Ackley function' illustrated above has evolved many variants: Google images
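For reference, the dissertation's original formulation is two-dimensional; the d-dimensional generalization below, with the commonly used parameters a = 20, b = 0.2, c = 2π, is the form most benchmarks cite.

```python
import math

def ackley(x, a=20.0, b=0.2, c=2.0 * math.pi):
    """Widely used form of the Ackley test function (one of many variants).
    Global minimum f(0,...,0) = 0, surrounded by many regularly spaced local minima."""
    d = len(x)
    sum_sq = sum(xi * xi for xi in x)
    sum_cos = sum(math.cos(c * xi) for xi in x)
    return (-a * math.exp(-b * math.sqrt(sum_sq / d))
            - math.exp(sum_cos / d) + a + math.e)

if __name__ == "__main__":
    print(ackley([0.0, 0.0]))    # ~0.0 at the global optimum
    print(ackley([1.0, 1.0]))    # noticeably worse a short step away
```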
Laws for gods
Life is a self-repairing, space-filling, programmable computing system. For that reason, artificial life principles and software will be central to future large-scale computer architectures. But today, most software 'alife' models are siloed in their own unique computational universes, hampering engineering interoperability and scientific generalization.
A common 'virtual physics' would offer tremendous leverage for combining alife models and mechanisms, but as a community we are far from a consensus on what essential properties it should have.
The principle of indefinite scalability is a candidate for such an essential property. That and related ideas are explored in the paper "Bespoke physics for living technology", appearing in the Summer/Fall 2013 issue of the journal Artificial Life.
Bespoke physics for living technology, open access: Abstract, PDF, HTML, BibTex.
A 14-minute video introduction to alife:
See also An infinite computer
Evolution of feelings, 1989–1993
Some things feel good and some things feel bad. Those feelings are our first and most intimate teachers. But where do they come from?
When natural selection kills the weak, stupid, isolated, or unlucky, the deceased can't learn anything useful from it. But evolution can bequeath us 'internal teachers' that generate reinforcement signals—feelings about the things we experience—for us to learn from throughout our lives. Death becomes a referendum on those inborn feelings, as well as on what we learned from them and did about them.
Evolutionary Reinforcement Learning (ERL) is a combined evolution and learning algorithm that Michael Littman and I developed to explore such mechanisms. In a simulated artificial life world, we showed that ERL was advantageous in the long run, while observing unexpected 'perversities' such as cannibalism and prey that loved predators.
ERL is the best-known of several evolutionary algorithms Michael and I developed together. It is described in the 1991 paper Interactions between learning and evolution: PDF, BibTex, Cited by 495
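ERL itself coupled neural networks for evaluation and action inside a full artificial-life world; the toy sketch below keeps only the arrangement described above, under my own drastic simplifications: evolution supplies the weights of an inborn 'feelings' function, lifetime learning adjusts behavior using only those signals, and selection acts on survival consequences the agent never directly senses. Every name and number here is invented for illustration.

```python
import random

def make_genome():
    """The genome is just the weights of an inborn evaluation ('feelings') function."""
    return [random.uniform(-1, 1) for _ in range(2)]

def feelings(genome, sensed):
    """Inherited reinforcement signal: how good does this sensation feel?"""
    return genome[0] * sensed[0] + genome[1] * sensed[1]

def lifetime(genome, steps=100):
    """The agent learns action preferences during life using only its own feelings
    as reinforcement; its actual survival payoff (energy) is what selection sees."""
    prefs = [0.0, 0.0]            # preference for 'eat food' vs 'ignore food'
    energy = 0.0
    for _ in range(steps):
        a = max((0, 1), key=lambda i: prefs[i] + random.gauss(0, 0.2))
        sensed = (1.0, 0.0) if a == 0 else (0.0, 1.0)   # what the outcome looks like
        prefs[a] += 0.1 * feelings(genome, sensed)      # learn from how it felt
        energy += 1.0 if a == 0 else -1.0               # true consequence, never sensed
    return energy

def evolve(pop_size=30, generations=15):
    pop = [make_genome() for _ in range(pop_size)]
    for _ in range(generations):
        survivors = sorted(pop, key=lifetime, reverse=True)[: pop_size // 2]
        pop = [[w + random.gauss(0, 0.1) for w in random.choice(survivors)]
               for _ in range(pop_size)]
    return max(pop, key=lifetime)

if __name__ == "__main__":
    random.seed(1)
    best = evolve()
    print("evolved feeling weights (eat, ignore):", [round(w, 2) for w in best])
    print("lifetime energy with evolved feelings:", lifetime(best))
```

In typical runs the evolved weights make eating feel good and ignoring feel bad, so learning during each lifetime quickly settles on eating: the feelings are selected for, the behavior is learned.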
A 1990 video demo of ERL is fun to watch, if one is into this sort of thing, but hard to find now.
Machines having ideas, 1984–1986
Summary yet to be written
The Boltzmann Machine proved to be one major component of the renaissance in neural networks that started in the 1980s. The key ideas were all Geoff Hinton's and Terry Sejnowski's. In addition to suggesting the term 'bias' for the negative of a threshold, my principal contributions were lots of simulator hacking, data gathering, and text writing for the 1985 paper A learning algorithm for Boltzmann Machines: PDF, BibTex, Cited by 2167
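For readers who don't want to open the paper, here are the machine's two core rules in today's standard notation (mine, not the paper's): each binary unit switches on stochastically according to its energy gap, with the bias b_i standing in for the negative threshold mentioned above, and each weight is nudged by the difference in pairwise co-activation statistics between the data-clamped and free-running phases.

```latex
% Stochastic unit update at temperature T; the bias b_i is the negative threshold.
p(s_i = 1) \;=\; \frac{1}{1 + e^{-\Delta E_i / T}},
\qquad
\Delta E_i \;=\; \sum_j w_{ij}\, s_j \;+\; b_i .

% Learning rule: co-activation with the data clamped versus free-running.
\Delta w_{ij} \;\propto\; \langle s_i\, s_j \rangle_{\text{clamped}}
                     \;-\; \langle s_i\, s_j \rangle_{\text{free}} .
```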
An infinite computer
Over the last 70 years, ever more powerful computers have revolutionized the world, but their common architectural assumptions—of CPU and RAM, and deterministic program execution—are now all hindrances to continued computational growth.
The common communications assumptions—of fixed-width addresses and globally unique node names—are likewise only finitely scalable.
Seriously scalable computing requires a robust spatial computer. Resilience, survivability, and graceful degradation must be inherent not just in the hardware but upwards throughout the computational stack. Low-level communications and naming must be based on relative spatial addressing.
The Movable Feast Machine (MFM) is a robust, indefinitely scalable computer architecture we are using to explore such issues.
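The MFM itself is a substantial hardware-and-software effort; purely as a toy sketch of the commitment just described, the snippet below runs random asynchronous 'events' on a small grid, where each event may inspect and modify only sites at small relative offsets from itself—no global clock, no global addresses. Element names, sizes, and rules here are all invented, not the MFM's.

```python
import random

EMPTY, RES = ".", "R"        # hypothetical element names, not the MFM's
SIZE, RADIUS = 16, 1         # a tiny grid and a tiny event-window radius

grid = [[RES if random.random() < 0.1 else EMPTY for _ in range(SIZE)]
        for _ in range(SIZE)]

def event(x, y):
    """One asynchronous event: the atom at (x, y) may only inspect and modify
    sites at small relative offsets -- purely spatial, relative addressing."""
    if grid[y][x] != RES:
        return
    dx, dy = random.randint(-RADIUS, RADIUS), random.randint(-RADIUS, RADIUS)
    nx, ny = (x + dx) % SIZE, (y + dy) % SIZE      # wraps only because this demo is finite
    if grid[ny][nx] == EMPTY:                      # diffuse into an empty neighbor
        grid[ny][nx], grid[y][x] = RES, EMPTY

if __name__ == "__main__":
    for _ in range(50_000):                        # events fire at random sites, no clock
        event(random.randrange(SIZE), random.randrange(SIZE))
    print("\n".join("".join(row) for row in grid))
```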
A short 2011 paper, written with Daniel C. Cannon, won the "Most Outrageous Opinion" prize at the Hot Topics in Operating Systems-XIII workshop: Abstract, PDF, BibTex.
A longer 2012 paper, with coauthors Daniel C. Cannon and Lance R. Williams, in The Computer Journal: Abstract, PDF, BibTex.
A 12-minute video demo:
See also Robust-first computing
Diversity for security, 1995–2006
Stephanie Forrest and I found common ground applying biological principles to computer security, and worked together on several projects with her students and others. One recurring theme was deliberately engineering diversity into computing systems to increase their resistance to attack.
At first that idea was a difficult sell. At least in small ways, all our suggested diversifications reduced efficiency. And they operated in areas—such as compiler and operating system design, and ultimately even machine instruction sets—where traditionally efficiency was king.
Today, through many people's efforts, engineered diversity is commonplace. Address space layout randomization (ASLR) increases the cost of attacking processes, as do randomized process identifiers; sequence number randomization similarly helps protect the TCP communications underlying internet services like the WWW.
Our 1997 paper with Anil Somayaji was one of the earliest to propose deliberate program randomization to enhance security: PDF, Preprint PDF, BibTex, Cited by 339
We first published the idea of instruction set randomization in 2003, reporting the work of Elena Gabriela Barrantes and several coauthors; a team at Columbia independently reported similar work at the same conference. Randomized instruction set emulation to disrupt binary code injection attacks: PDF, BibTex, Cited by 303
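As a rough illustration of the idea (my own toy, not the RISE implementation): scramble a process's code under a per-process secret key at load time, unscramble at fetch time, and any injected code that never saw the key decodes to noise instead of the attacker's instructions.

```python
import secrets

def scramble(code: bytes, key: bytes) -> bytes:
    """XOR each code byte with a per-process key (repeated as needed)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(code))

def execute(code: bytes, key: bytes):
    """Stand-in for an emulator's fetch stage: descramble each byte before 'running' it."""
    for i, b in enumerate(code):
        yield b ^ key[i % len(key)]      # only correctly keyed code makes sense

if __name__ == "__main__":
    key = secrets.token_bytes(16)                 # fresh key for this process
    program = bytes([0x90, 0x90, 0xC3])           # some legitimate machine code
    loaded = scramble(program, key)               # what actually sits in memory

    print(list(execute(loaded, key)))             # legitimate code runs as intended
    injected = bytes([0x31, 0xC0, 0xCD, 0x80])    # attacker bytes, never scrambled
    print(list(execute(injected, key)))           # decodes to gibberish; the attack fails
```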
See also Robust-first computing