Computer Immune Systems
Intrusion Detection Project

We are developing an intrusion-detection system for networked computers. Discrimination between normal and abnormal behavior must be based on some characteristic structure that is both compact and universal in the protected system. Further, it must be sensitive to most undesirable behavior. Most earlier work on intrusion detection monitors the behavior of individual users, but we have decided to concentrate on system-level processes. We define self in terms of short sequences of system calls executed by privileged processes in a networked operating system. Preliminary experiments on a limited testbed of intrusions and other anomalous behavior show that short sequences of system calls (currently sequences of length 6) provide a compact signature for self that distinguishes normal from abnormal behavior.
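To make the notion of a signature concrete, the following is a minimal sketch in Python (not our actual tools; the function name, trace format, and example trace are illustrative) of how the length-6 sliding windows of system calls can be extracted from a single process trace. The set of distinct windows observed is the kind of compact pattern collection we have in mind.

    # Minimal sketch: extract length-6 sliding windows from a system-call trace.
    # The trace format and names here are illustrative assumptions, not our real tools.
    WINDOW = 6

    def sliding_windows(trace, k=WINDOW):
        """Return every contiguous length-k sequence of system calls in the trace."""
        return [tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)]

    # Example: a short, made-up trace from a privileged process.
    trace = ["open", "read", "mmap", "mmap", "open", "read", "mmap"]
    print(sliding_windows(trace))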

The strategy for our intrusion-detection system is to collect a database of normal behavior for each program of interest. Each database is specific to a particular architecture, software version and configuration, local administrative policies, and usage patterns. Once a stable database is constructed for a given program in a particular environment, the database can then be used to monitor the program's behavior. The sequences of system calls form the set of normal patterns for the database, and sequences not found in the database indicate anomalies. In the first stage, we collect samples of normal behavior and build up a database of characteristic normal patterns (observed sequences of system calls). Parameters to system calls are ignored, and we trace forked subprocesses individually. In the second stage, we scan traces that might contain abnormal behavior, matching the trace against the patterns stored in the database. If a pattern is seen that does not occur in the normal database, it is recorded as a mismatch. In our current implementation, tracing and analysis are performed off-line. Mismatches are the only observable that we use to distinguish normal from abnormal. We observe the number of mismatches encountered during a test trace and aggregate the information in several ways.
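The sketch below (Python, with hypothetical names and a simplified trace format) walks through both stages described above: building the normal database as the set of length-6 call sequences observed in normal runs, then scanning a test trace and counting windows that are absent from the database. The example traces are made up for illustration.

    # Sketch of the two-stage procedure described above, under assumptions:
    # traces are lists of system-call names (parameters already ignored, one trace
    # per process), and names like build_database are illustrative only.
    WINDOW = 6

    def windows(trace, k=WINDOW):
        return [tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)]

    def build_database(normal_traces, k=WINDOW):
        """Stage 1: collect the set of length-k sequences seen in normal behavior."""
        db = set()
        for trace in normal_traces:
            db.update(windows(trace, k))
        return db

    def scan(test_trace, db, k=WINDOW):
        """Stage 2: count windows in the test trace that do not occur in the database."""
        ws = windows(test_trace, k)
        mismatches = sum(1 for w in ws if w not in db)
        rate = mismatches / len(ws) if ws else 0.0
        return mismatches, rate

    # Usage: an unusually high mismatch count or rate flags the trace as anomalous.
    normal = [["open", "read", "mmap", "mmap", "open", "read", "mmap", "close"]]
    db = build_database(normal)
    print(scan(["open", "read", "mmap", "exec", "open", "read", "mmap"], db))

The raw mismatch count and the mismatch rate shown here are two examples of the kinds of aggregates that can be computed from the mismatch observable; in practice the database is built separately for each program of interest in its particular environment.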

Although this method does not provide a cryptographically strong or completely reliable discriminator between normal and abnormal behavior, it is much simpler than other proposed methods and could potentially provide a lightweight, real-time tool for continuously checking executing code. Another appealing feature is that code that runs frequently will be checked frequently, and code that is seldom executed will be infrequently checked. Thus, system resources are devoted to protecting the most relevant code segments. Finally, given the large variability in how individual systems are currently configured, patched, and used, we expect that databases at different sites would likely differ enough to give each protected location its own unique signature (see organizing principles). A unique signature is important for another reason---it could provide a form of identity that is much harder to falsify than, for example, an IP address.

© 1997 Steven A. Hofmeyr
