Email: ackley@cs.unm.edu (try subject 'Cold contact' if I don't know you)
Office: FEC 3200
Office hours: No regular office hours; send email
Mastodon: @livcomp

Not currently teaching classes


David H. Ackley

I do research, development, and advocacy of robust-first and best-effort computing on indefinitely scalable computer architectures. As of August 2018 I am an emeritus professor of Computer Science at the University of New Mexico. My academic degrees are from Tufts and Carnegie Mellon. Prior work has involved neural networks and machine learning, evolutionary algorithms and artificial life, and biological approaches to security, architecture, and models of computation.

Results highlights

I have always worked with smart, creative people. I am drawn toward robust computational architectures, distributed and bottom-up control, adaptation and evolution, and randomness. Among my project contributions, these are more recent, higher impact, or just personal favorites.



A Computer of Unlimited Size

2009–present
PDF, BibTex

[Image: Overview diagram of the Movable Feast Machine (source: PDF)]

Over the last 70 years, ever more powerful computers have revolutionized the world, but their common architectural assumptions—of CPU and RAM, and deterministic program execution—are now all hindrances to continued computational growth.

Many common networking assumptions—such as fixed-width addresses and globally unique node names—are likewise only finitely scalable.

Seriously scalable computing requires a robust spatial computer. Resilience, survivability, and graceful degradation must be inherent not just in the hardware but upwards throughout the computational stack. Low-level communications and naming must be based on relative spatial addressing.

The Movable Feast Machine (MFM) is a robust indefinitely scalable computer architecture we are using to explore such issues.
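
To make 'relative spatial addressing' concrete, here is a minimal toy sketch in Python. It is not MFM code; the grid size, the single atom type, and the event loop are invented for illustration. Each event acts on an atom and the sites around it, named only by offsets from the event center, and events fire asynchronously with no global clock and no global addresses.

```python
import random

GRID = 32                                     # toy grid edge length (invented)
sites = {(x, y): None for x in range(GRID) for y in range(GRID)}
sites[(16, 16)] = "RES"                       # seed one 'resource' atom

OFFSETS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # sites named only relative to the event center

def event(center):
    """One asynchronous event: the atom at `center` acts on nearby sites only."""
    if sites.get(center) != "RES":
        return
    dx, dy = random.choice(OFFSETS)
    target = (center[0] + dx, center[1] + dy)
    if target in sites and sites[target] is None:
        sites[target] = "RES"                 # copy the atom into an adjacent empty site

coords = list(sites)
for _ in range(20000):                        # events fire at random sites; no global clock
    event(random.choice(coords))

print(sum(1 for a in sites.values() if a == "RES"), "RES atoms after 20000 events")
```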


A short 2011 paper, written with Daniel C. Cannon, won the "Most Outrageous Opinion" prize at the Hot Topics in Operating Systems-XIII workshop: Abstract, PDF, BibTex.

A longer paper (2013, early access 2012), with coauthors Daniel C. Cannon and Lance R. Williams, in The Computer Journal: Abstract, PDF, BibTex.

A 12-minute demo:

Project home page

See also Robust-first computing


Peer-to-Peer Social Networking

1989–2002

[Image: ccrTk screen capture, March 28, 1996 (source: PDF)]

Facebook connects people to each other, but only indirectly. All your information flows first into their central servers, raising issues of data ownership and resale, aggregation and privacy, retention and control. Most social networks today work the same way, though there are ever more exceptions.

The ccr project, started at Bellcore in 1989, and continued at UNM starting in 1994, explored peer-to-peer social networking. Each ccr user—each peer—is the omnipotent 'god' of their own world, but a mere 'mortal' when visiting the worlds of others. The power dynamic is deeply symmetric. Data centralization is minimized.

Among the mechanisms developed and deployed in ccr: Cryptography and public keys aided communications privacy and peer authentication, adaptive resource management mitigated channel and processor flooding attacks, a subject-oriented type system allowed remote code execution with credible safety, and an end-user programming language allowed the development of custom 'laws of physics'.


A 1997 paper offers an overview of ccr: PDF, BibTex.

A 2000 paper (later cited by The Economist), uses ccr in examples: HTML, PDF, BibTex.

Regrettably, many lessons learned from ccr have yet to be presented in the literature, but they were pivotal in shaping my subsequent work in computer security (see highlight) and architecture (see highlight).

Many people participated in ccr over the years, as developers and interface designers, and as gods of and mortals in the distributed ccr universe.

Excerpts from the ccr object hierarchy

The ccr key server live status page


Ackley Functions

1985–1987

[Image: The original 'Ackley Function', Figure 1-6 from A connectionist machine for genetic hillclimbing (source: Amazon)]
Impact is where it happens.

Summary yet to be written


My 1987 Ph.D. dissertation, published as A connectionist machine for genetic hillclimbing: Amazon, CMU thesis archive, BibTex, Cited by 702

As a test problem for optimization, the original 'Ackley function' illustrated above has evolved many variants: Google images
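
For reference, the widely used n-dimensional generalization of the function, with the typical constants a = 20, b = 0.2, and c = 2π, is only a few lines of Python (the constants and test points below are the customary illustrative choices, not necessarily the exact 1987 formulation):

```python
import math

def ackley(x, a=20.0, b=0.2, c=2.0 * math.pi):
    """Ackley test function; global minimum of 0 at the origin."""
    n = len(x)
    sum_sq = sum(xi * xi for xi in x)
    sum_cos = sum(math.cos(c * xi) for xi in x)
    return (-a * math.exp(-b * math.sqrt(sum_sq / n))
            - math.exp(sum_cos / n) + a + math.e)

print(ackley([0.0, 0.0]))        # 0.0, up to floating-point rounding
print(ackley([1.0, -2.0, 3.5]))  # strictly positive; the surface is full of local minima
```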


Artificial Life Engineering

2012–present
PDF, BibTex

[Image: Autonomous structure construction by engineered agents (source: PDF)]

Life is a self-repairing, space-filling, programmable computing system. For that reason, artificial life principles and software will be central to future large-scale computer architectures. But today, most software 'alife' models are siloed in their own unique computational universes, hampering engineering interoperability and scientific generalization.

A common 'virtual physics' would offer tremendous leverage for combining alife models and mechanisms, but as a community we are far from a consensus on what essential properties it should have.

We suggest the principle of indefinite scalability is a candidate for such an essential property. We place that argument before the artificial life research community in the paper "Indefinitely Scalable Computing = Artificial Life Engineering", presented July 31, 2014 at the ALIFE14 conference in New York City.


ALIFE14 paper: Indefinitely Scalable Computing = Artificial Life Engineering, free download: Abstract, PDF, BibTex.

A longer 2013 paper in the Artificial Life Journal: Bespoke physics for living technology, open access: (Abstract, HTML), PDF, BibTex.

A 14-minute intro to alife:

See also An infinite computer


Evolution of Feelings

1989–1993

[Image: Overview of the ERL architecture, in which evolution makes feelings (source: PDF)]

Some things feel good and some things feel bad. Those feelings are our first and most intimate teachers. But where do they come from?

When natural selection kills the weak, stupid, isolated, or unlucky, the deceased can't learn anything useful from it. But evolution can bequeath to us 'internal teachers' that generate reinforcement signals—feelings about the things we experience—for us to learn from throughout our lives. Death becomes a referendum on those inborn feelings, as well as on what we learned from them and did about them.

Evolutionary Reinforcement Learning (ERL) is a combined evolution and learning algorithm that Michael Littman and I developed to explore such mechanisms. In a simulated artificial life world, we showed that ERL was advantageous in the long run, while observing unexpected 'perversities' such as cannibalism and prey that loved predators.
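
The sketch below is a heavily simplified toy in the spirit of ERL, not the 1991 model itself; the world, the single evolved 'evaluation' weight, and the learning rule are all invented for illustration. Each agent inherits a weight that turns events into internal reinforcement, learns an action preference from that signal during its lifetime, and selection then acts on lifetime success.

```python
import math, random

def lifetime(eval_weight, steps=200, lr=0.1):
    """One agent's life: it learns an action preference from its own inborn reward signal."""
    pref, energy = 0.0, 0.0
    for _ in range(steps):
        approach = random.random() < 1.0 / (1.0 + math.exp(-pref))
        ate = approach and random.random() < 0.8        # approaching food usually succeeds
        energy += 1.0 if ate else -0.1                  # actual consequence, hidden from learning
        reward = eval_weight * (1.0 if ate else -1.0)   # inborn 'feeling' about what just happened
        pref += lr * reward * (1.0 if approach else -1.0)
    return energy

population = [random.uniform(-1.0, 1.0) for _ in range(30)]   # evolving evaluation weights
for generation in range(20):
    survivors = sorted(population, key=lifetime, reverse=True)[:10]
    population = [w + random.gauss(0.0, 0.1) for w in survivors for _ in range(3)]

print("mean evolved 'eating feels good' weight:", sum(population) / len(population))
```

Agents whose inherited weight makes eating feel good learn to approach food, live longer, and leave descendants, so the population's 'feelings' drift toward the ones that serve survival.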


ERL is the best-known of several evolutionary algorithms Michael and I developed together. It is described in the 1991 paper Interactions between learning and evolution: PDF, BibTex, Cited by 495

A 1990 video demo of ERL is fun to watch, if one is into this sort of thing, but hard to find now.

[Image: Comparison of 'Adam's genes' and its 200+ generations descendant (source: Amazon)]
Some guy talking about genetically-engineered super-creatures taking over an artificial life world.

Machines Having Ideas

1984–1986

[Image: An automatically-learned 4-2-4 binary encoder (source: PDF)]

Summary yet to be written


The Boltzmann Machine proved to be one major component of the renaissance in neural networks that started in the 1980's. The key ideas were all Geoff Hinton's and Terry Sejnowski's. In addition to suggesting the term 'bias' for the negative of a threshold, my principal contributions were lots of simulator hacking, data gathering, and text writing for the 1985 paper A learning algorithm for Boltzmann Machines: PDF, BibTex, Cited by 2167
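
The learning rule at the heart of that paper changes each weight in proportion to the difference between how often the two connected units are on together when the visible units are clamped to the data, and how often they are on together when the network runs free. The sketch below is an illustrative Python toy, not the paper's simulator: it computes both statistics by exact enumeration, which only works for a tiny network, and it omits biases; the network size, temperature, and training patterns are invented.

```python
import itertools, math

N_VIS, N_HID = 4, 2                        # tiny network so exact enumeration is feasible
N = N_VIS + N_HID
T = 1.0                                    # temperature
w = [[0.0] * N for _ in range(N)]          # symmetric weights; biases omitted for brevity

def energy(s):
    return -sum(w[i][j] * s[i] * s[j] for i in range(N) for j in range(i + 1, N))

def equilibrium(clamped=None):
    """Exact Boltzmann distribution over unit states, optionally with visibles clamped."""
    states, weights = [], []
    for s in itertools.product([0, 1], repeat=N):
        if clamped is not None and list(s[:N_VIS]) != clamped:
            continue
        states.append(s)
        weights.append(math.exp(-energy(s) / T))
    z = sum(weights)
    return [(s, wt / z) for s, wt in zip(states, weights)]

def cooccurrence(dist):
    """p_ij: probability that units i and j are both on at equilibrium."""
    p = [[0.0] * N for _ in range(N)]
    for s, prob in dist:
        for i in range(N):
            for j in range(i + 1, N):
                p[i][j] += prob * s[i] * s[j]
    return p

patterns = [[1, 0, 1, 0], [0, 1, 0, 1]]    # toy training data for the visible units
eps = 0.5
for _ in range(100):
    p_clamped = [[0.0] * N for _ in range(N)]
    for v in patterns:                     # 'wake' phase: visible units clamped to the data
        c = cooccurrence(equilibrium(clamped=v))
        for i in range(N):
            for j in range(i + 1, N):
                p_clamped[i][j] += c[i][j] / len(patterns)
    p_free = cooccurrence(equilibrium())   # 'sleep' phase: the network runs free
    for i in range(N):
        for j in range(i + 1, N):
            w[i][j] += eps * (p_clamped[i][j] - p_free[i][j])   # the Boltzmann learning rule
            w[j][i] = w[i][j]

print("learned weight between co-active visibles 0 and 2:", round(w[0][2], 3))
print("learned weight between anti-correlated visibles 0 and 1:", round(w[0][1], 3))
```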


Diversity for Security

1995–2006

[Image: Diversity for security visualization (quote source: PDF)]

Stephanie Forrest and I found common ground applying biological principles to computer security, and worked together on several projects with her students and others. One recurring theme was deliberately engineering diversity into computing systems to increase their resistance to attack.

At first that idea was a difficult sell. At least in small ways, all our suggested diversifications reduced efficiency. And they operated in areas—such as compiler and operating system design, and ultimately even machine instruction sets—where traditionally efficiency was king.

Today, through many people's efforts, engineered diversity is commonplace. Address space layout randomization (ASLR) increases the cost of attacking processes, as do randomized process identifiers; sequence number randomization similarly helps protect the TCP communications underlying internet services like the WWW.
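
As a toy illustration of why such diversity helps (this is not real ASLR; the addresses and offset range below are invented), consider an exploit that hard-codes a target address: it lands every time against a fixed layout and almost never against a randomized one.

```python
import random

HARDCODED_TARGET = 0x0804_8000             # address baked into a hypothetical exploit

def buffer_address(randomize):
    """Return the address a toy 'process' places its buffer at."""
    base = 0x0804_8000
    if randomize:
        base += random.randrange(0, 1 << 16) << 12   # random page-aligned offset
    return base

trials = 100_000
fixed = sum(buffer_address(False) == HARDCODED_TARGET for _ in range(trials))
diverse = sum(buffer_address(True) == HARDCODED_TARGET for _ in range(trials))
print(f"hard-coded exploit lands {fixed}/{trials} times against the fixed layout")
print(f"hard-coded exploit lands {diverse}/{trials} times against the randomized layout")
```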


Our 1997 paper with Anil Somayaji was one of the earliest to propose deliberate program randomization to enhance security: PDF, Preprint PDF, BibTex, Cited by 339

We first published the idea of instruction set randomization in 2003, reporting the work of Elena Gabriela Barrantes and several coauthors; a team at Columbia independently reported similar work at the same conference. Randomized instruction set emulation to disrupt binary code injection attacks: PDF, BibTex, Cited by 303

See also Robust-First Computing


Digital Protocell Membrane

2018
PDF, BibTex

[Image: C212 digital protocells, a screen grab from an Open-Ended Evolution talk (source: YouTube video)]

Biological cells are the building blocks of all complex life on Earth. For robust computation in future indefinitely scalable computers, we need digital cell membranes to facilitate, limit, and regulate interactions between operational cell contents and the surrounding environment.

Using a newly-developed pattern-oriented programming language called 'SPLAT' — Spatial Programming Language, ASCII Text — we are developing a series of digital protocell membranes to support functions like cell growth and mobility, and cell splitting and fusion.
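
To convey the flavor of pattern-oriented spatial rewriting, here is a toy rule engine in Python. It is not SPLAT and does not use SPLAT's syntax; the grid, the rule, and the character key are invented for illustration. A left-hand side names nearby sites relative to an event center, and if it matches, the right-hand side is written back, here growing a membrane atom next to an existing one.

```python
# key: 'M' = membrane atom, '_' = must be empty, ' ' = don't care
LHS = [" M ",
       " _ ",
       "   "]
RHS = [" M ",
       " M ",
       "   "]

def apply_rule(grid, cx, cy):
    """Try the rule centered at (cx, cy); rewrite the grid in place if the LHS matches."""
    for dy in range(3):
        for dx in range(3):
            want = LHS[dy][dx]
            have = grid[cy + dy - 1][cx + dx - 1]
            if want == 'M' and have != 'M':
                return False
            if want == '_' and have != '.':
                return False
    for dy in range(3):
        for dx in range(3):
            if RHS[dy][dx] != ' ':
                grid[cy + dy - 1][cx + dx - 1] = RHS[dy][dx]
    return True

grid = [list(row) for row in (".....", "..M..", ".....", ".....", ".....")]
apply_rule(grid, 2, 2)                  # the seed 'M' sits just above the event center
print("\n".join("".join(row) for row in grid))
```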

With prototypes such as the C212 membrane (illustrated), we are gaining the scientific knowledge and systems engineering wisdom to create 'digital multicellular organisms' within indefinitely scalable computing, increasing the potential for a digital 'Cambrian explosion' that stands to revolutionize manufactured computing.


A paper presented at ALIFE2018 (PDF, BibTex) introduces SPLAT and the digital protocell membranes we are exploring. An OEE3 paper with Elena S. Ackley (PDF preprint, video) presents analytic techniques for tracking life histories of such protocells.

A 2008–2018 project overview report (8 min + 6 min questions):


The Microcomputome

2016
PDF, BibTex

[Image: Cyberspace Considered Harmful, showing cyberspace as nowhere on a blank map (source: YouTube video)]

The idea of 'cyberspace' is dangerous because it implies a separation between where 'computer things' are and where we are — but with wifi and cellphones and video cameras and the coming 'Internet of Things', that is becoming ever less true. We exist in the same space as our machines.

We cannot let our physical skin be the boundary between our body and a physical world that is increasingly packed with computational entities. We must have machines on our side too, that travel and stay with us, that support us and are supported by us purely as individuals, and are beholden to none but us.

By analogy to the 'microbiome' that surrounds and perfuses our biological body, I call those machines the microcomputome, and argue they should be treated — by law and custom — as if they are part of our bodies.

The Carried Network Demarc is a simple, if radical, proposal to enable the microcomputome.


A two-page 2016 paper (PDF, BibTex) presented at the ISAL Special Session on Alife and Society introduced the carried network demarc. The term 'microcomputome' was introduced in the associated talk (YouTube video).


Robust-first computing

2010–present
PDF, BibTex

[Image: Robust-first computing icons, R-FC vs !CEO SW (source: SVG)]

Next to the wary toughness of living creatures, modern mass market computers are shamefully brittle and frighteningly insecure. We blame clueless users and lazy programmers. We blame tech companies, terrorist countries, and computer criminals. We must also blame the decades we in computer science have spent optimizing efficiency at all costs.

For the exploding population of computers in the wild—in our cars, smartphones, medical equipment—we need to elevate robustness to be their primary design criterion. This impacts the entire computational stack.

Because efficiency costs robustness.

These ideas are explored in a short, easy-to-read paper, "Beyond Efficiency", appearing as a Viewpoint in the October 2013 issue (2024: Now open access! Thanks ACM!) of the Communications of the ACM. Sorting is used to exemplify the tradeoffs between efficiency and robustness, and a much-maligned algorithm finds a niche.
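
As a toy in the spirit of that example (the details differ from the paper; the error rate, list size, and displacement measure below are invented for illustration), a comparator that occasionally lies hurts an efficient sort, which trusts each comparison exactly once, far more than a plodding bubble sort that keeps re-checking its own work:

```python
import random

def noisy_less(a, b, err=0.05):
    """A comparator that is wrong with probability err."""
    return (a < b) != (random.random() < err)

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out = []
    while left and right:
        out.append(left.pop(0) if noisy_less(left[0], right[0]) else right.pop(0))
    return out + left + right

def bubble_sort(xs):
    xs = list(xs)
    for _ in range(len(xs)):                    # every adjacent pair is checked again and again
        for i in range(len(xs) - 1):
            if noisy_less(xs[i + 1], xs[i]):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

def displacement(xs):
    """Total distance of every value from where it belongs (0 means perfectly sorted)."""
    return sum(abs(pos - val) for pos, val in enumerate(xs))

data = list(range(64))
random.shuffle(data)
print("merge sort displacement under a 5% faulty comparator: ", displacement(merge_sort(data)))
print("bubble sort displacement under a 5% faulty comparator:", displacement(bubble_sort(data)))
```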


2013 essay: Beyond Efficiency, BibTex, author's preprint: PDF.

2014 paper presented at FTXS 2014: Comparison criticality in Sorting Algorithms, BibTex, author's accepted version: PDF.

A 15-minute intro to robust-first computing:

See also An infinite computer