Distributed CoreWars

In the "distributed" version of the rules, there are multiple virtual machines, connected by a virtual network. Programs are able to copy themselves across the network to other RVMs and attempt to take over those machines as well. The object of the game is still the same, however -- be the only PROCESS GROUP remaining on any RVM or, failing that, control the most RAM across all RVMs.

The mechanics of a single RVM are the same as in a uniprocessor game, except that the RVM supports four additional instructions (a sketch of their semantics follows the list):

open
The open instruction creates a connection from the PROCESS that executes it to a remote RVM; the precise semantics are specified in Appendix A, Table 3. Once a PROCESS has successfully opened a network connection, it can write to the remote machine with the rsw instruction.
rsw
Remote store word. This instruction allows a PROCESS to store a word to the memory of the remote machine that it contacted with open. Syntactically, it's the same as the sw instruction, but rather than writing to an address on the local machine, it writes to an address on the remote machine.
rfrk
Remote fork. This instruction allows a PROCESS to create a PROCESS on the remote machine that it contacted with open. Syntactically, this instruction is the same as the frk instruction, but the new PROCESS is created on the remote RVM rather than the local one.
close
Close the connection to the remote machine.
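
To make the intended semantics concrete, here is a minimal sketch of the four instructions from the interpreter's point of view, written in Python. All of the names here (Process, send_word, spawn_process, and so on) are assumptions made for this sketch; Appendix A remains the authoritative specification.

    class Process:
        """Sketch of a PROCESS, including the SOCKET added by DCoreWars."""

        def __init__(self, pgid):
            self.pgid = pgid
            self.socket = None        # SOCKETs are initialized to closed

        def op_open(self, remote_rvm):
            # open: connect this PROCESS to a remote RVM.
            self.socket = remote_rvm

        def op_rsw(self, addr, word):
            # rsw: like sw, but the word is sent to the connected remote RVM,
            # where it is queued in that RVM's incoming data buffer (see
            # Network Communications, below) rather than stored immediately.
            # (Behavior on a closed SOCKET is defined in Appendix A; this
            # sketch simply ignores the write.)
            if self.socket is not None:
                self.socket.send_word(self.pgid, addr, word)

        def op_rfrk(self, addr):
            # rfrk: like frk, but the new PROCESS is created on the remote RVM.
            if self.socket is not None:
                self.socket.spawn_process(self.pgid, addr)

        def op_close(self):
            # close: drop the connection to the remote machine.
            self.socket = None

A typical strategy, then, might be for a PROGRAM to open a connection, rsw a copy of itself into the remote RVM's memory word by word, rfrk a new PROCESS at the copied entry point, and then close the connection.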

Additional details on each of these instructions are found in Appendix A. In addition, a number of system calls are supported in the distributed game -- see Appendix A, Table 2 for details.

The DCoreWars version also requires a few additional rules:

Program Image Loading
In DCoreWars, there are three possible initialization modes (to be specified by the user before the game begins); a sketch of all three follows the list:
  1. All programs loaded into a single RVM. In this initialization mode, all programs are loaded into the first RVM under the rules for uniprocessor loading, and all other RVMs are initialized to empty (i.e., no PROCESS GROUPs, all memory unowned, and all memory cells initialized to HLT).
  2. Each program loaded into a different RVM. In this mode, each PROGRAM IMAGE is loaded onto a different, randomly selected RVM. All other RVMs are initialized to empty, as in the previous rule.
  3. One copy of each program loaded onto each RVM. In this mode, every RVM is initialized with all of the PROGRAM IMAGEs, according to the rules for uniprocessor loading.
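
As a rough illustration, the dispatch over the three modes might look like the following Python sketch. The helpers initialize_empty() and load_uniprocessor() are hypothetical stand-ins for the empty-RVM initialization and the uniprocessor loading rules described above.

    import random

    def initialize_game(rvms, program_images, mode):
        for rvm in rvms:
            rvm.initialize_empty()    # no PROCESS GROUPs, memory unowned, HLT
        if mode == 1:
            # All programs into the first RVM, under uniprocessor loading rules.
            rvms[0].load_uniprocessor(program_images)
        elif mode == 2:
            # Each program onto a different, randomly selected RVM
            # (assumes at least as many RVMs as PROGRAM IMAGEs).
            targets = random.sample(rvms, len(program_images))
            for image, rvm in zip(program_images, targets):
                rvm.load_uniprocessor([image])
        elif mode == 3:
            # One copy of every program onto every RVM.
            for rvm in rvms:
                rvm.load_uniprocessor(program_images)
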
PROCESS Initialization
PROCESSes are initialized as in UCoreWars, but DCoreWars PROCESSes additionally have a SOCKET to support network communications. All SOCKETs are initialized to closed (i.e., no network connection exists when the PROCESS is created).
Round Robin Scheduling
On each RVM, scheduling happens in the same way as on a single, uniprocessor RVM (see above). If only a single PROCESS GROUP exists on a given RVM, it clearly gets all of that RVM's cycles.

The individual RVMs are not, however, synchronous. Each RVM can have its own CPU speed, different from all other RVMs, and each RVM can (and most likely will) execute its instructions at different wall-clock times.

In addition to the normal, uniprocessor execution cycle, each cycle in the distributed CoreWars game also includes a network data loading phase; see Network Communications, below, for details.
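
Putting scheduling and the network phase together, one RVM cycle has roughly the following shape; the function and method names are assumptions for this Python sketch, not part of the rules.

    def cycle(rvm):
        load_network_data(rvm)         # network data loading phase (see below)
        process = rvm.next_process()   # round-robin pick, as in UCoreWars
        rvm.execute(process)           # one instruction for that PROCESS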

Network Initialization
Initially, no connections are open between any machines.
Multithreading
The asynchronous scheduling of the multiple RVMs in a DCoreWars game is supported by a multithreaded architecture. Each RVM is associated with a thread that handles its execution and decides when its cycle occurs, based on that RVM's CPU speed.
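
A minimal Python sketch of that architecture, assuming cycle() is the per-cycle function sketched above and that each RVM has a cpu_speed attribute measured in cycles per second:

    import threading

    def run_rvm(rvm, stop_event):
        period = 1.0 / rvm.cpu_speed       # seconds per cycle for this RVM
        while not stop_event.is_set():
            cycle(rvm)                     # one full execution cycle
            stop_event.wait(period)        # RVMs pace themselves independently

    def start_game(rvms):
        stop_event = threading.Event()
        threads = [threading.Thread(target=run_rvm, args=(rvm, stop_event))
                   for rvm in rvms]
        for thread in threads:
            thread.start()
        return stop_event, threads         # set stop_event to end the game

Because each thread paces itself by its own RVM's CPU speed, the RVMs naturally drift apart in wall-clock time, which is exactly the asynchrony described above.
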
Network Communications
When an RVM sends a word of memory to another RVM (via the rsw instruction), that word is queued in an incoming data buffer for the receiving RVM. At the beginning of each of its cycles, each RVM MUST check its incoming data buffer. If there is data queued and waiting, then the RVM removes the first word from the buffer and stores it at the requested memory location. The ownership of that memory location is changed to the PGID that sent the data. This operation takes place before the execution of the instruction for the current PROCESS. Only one word can be stored in this way per cycle.

The incoming data buffer for each RVM is a limited-length buffer (specified by the Data Buffer Size parameter for that RVM). When that capacity is reached (i.e., more data has been sent to the buffer by other RVMs than has been removed from it by this RVM), additional data messages are dropped.
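
A sketch of the buffer's behavior, under the same naming assumptions as the earlier sketches (send_word() is what a remote rsw ultimately invokes on the receiving side, and load_network_data() is the per-cycle drain):

    from collections import deque

    class IncomingBuffer:
        def __init__(self, data_buffer_size):
            self.capacity = data_buffer_size
            self.queue = deque()

        def send_word(self, sender_pgid, addr, word):
            # Called (via the owning RVM) when a remote PROCESS executes rsw.
            if len(self.queue) < self.capacity:
                self.queue.append((sender_pgid, addr, word))
            # Otherwise the buffer is full and the message is dropped.

    def load_network_data(rvm):
        # At most one word per cycle, applied before the current PROCESS's
        # instruction executes; ownership passes to the sender's PGID.
        if rvm.inbuf.queue:
            pgid, addr, word = rvm.inbuf.queue.popleft()
            rvm.memory[addr] = word
            rvm.owner[addr] = pgid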

Terran Lane 2004-03-29