#!

The purpose of this blog is to share various pieces of code I find particularly interesting. Almost all posts will be heavily commented (often "literate") code. My main areas of interest are security, operating systems, programming language theory/implementation, and formal methods; so most of the posts will be related to one of these themes. Most of the posts will be my original code, but some will be pearls taken from elsewhere (always credited) with added commentary.

Thursday, March 26, 2009

A Simple Bytecode VM




/*
Bytecode virtual machines are a popular topic these days. The
popularity of high level dynamic languages makes improvements in all
things bytecode big news. Some of my favorite projects to track are

PyPy http://codespeak.net/pypy/
Tamarin Tracing https://wiki.mozilla.org/Tamarin:Tracing
and SquirrelFish http://trac.webkit.org/wiki/SquirrelFish

I think the coolest area of research right now is in JIT (just in
time) compilers. But first, let's write an interpreter.

This post is in good old-fashioned C. The complete source code for
the interpreter, an assembler, a pretty hackish compiler, and some
examples can be downloaded from
http://aaron.13th-floor.org/src/bytecode.tgz
*/


/* First we include a number of standard C headers. Nothing
exciting */

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
#include <arpa/inet.h> /* for ntohl */

/*
The file interpreter.h defines all the opcodes and the basic types
used by the interpreter. The types are: integers, pointers, VM
constants, and language constants.

All types used by the VM fit in a 32 bit word (throughout the code
we assume the native word size is 32 bits; porting to 64 bits is
left as an exercise to the reader). The top two bits are reserved
for flags used to identify the type of the word.

Pointers are packed to include a 15 bit size field and a 15 bit
location field (the total memory size is also defined in
interpreter.h, to be 8192 words).

VM constants include instructions, characters, and the boolean
values true and false.

Languages are free to define up to 2^30 new constants that will be
treated as opaque blobs by the machine (only an equality operation is
defined).
*/
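
/*
interpreter.h itself is not reproduced in this post, but to make the
bit layout concrete, here is a hedged sketch of what the packing side
might look like. The EX_ names are hypothetical, and the exact flag
values for NUM and LCONST are guesses (the code below only pins down
VCONST as 0 and PTR as (2 << 30)); the real header defines MAKE_NUM,
MAKE_PTR, MAKE_CHAR, and friends.
*/

#define EX_VCONST (0x0 << 30) /* VM constants: 00 in the top two bits */
#define EX_NUM    (0x1 << 30) /* guessed */
#define EX_PTR    (0x2 << 30) /* per the TYPE_ERROR comment below */
#define EX_LCONST (0x3 << 30) /* guessed */

/* pack a 15 bit size and a 15 bit target into a tagged pointer word */
#define EX_MAKE_PTR(target, size) \
  (EX_PTR | (((size) & 0x7fff) << 15) | ((target) & 0x7fff))

/* mask off the top two bits of a native integer and tag it as a number */
#define EX_MAKE_NUM(n) ((((long)(n)) & ~(0x3 << 30)) | EX_NUM)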


#include "interpreter.h"

/*
Our interpreter will be a basic stack machine. Memory consists of
a large array with the stack at the top growing down; the static
code is at the bottom, followed by two heaps (one live, and one
dead used for garbage collection). The live heap grows
monotonically upward until it is about to collide with either the
other heap or the reserved stack space (depending on which heap is
live), at which point the garbage collector is invoked.
*/
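
/*
A rough sketch of the layout (STACK_SIZE words are reserved at the
top; the exact proportions depend on the size of the loaded program):

    memory[0]                                      memory[MEM_SIZE-1]
    | program text | heap 0     | heap 1     | ... <- stack grows down |
    ^              ^prog_end    ^upper_heap
*/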


/*
Sidenote: if you download the tarball from my website you'll notice
a number of different definitions of the following macros that can
be selected using various preprocessor defines. These include
versions that do additional bounds checking on stack
access/allocation, and dumping VM status after each instruction.
For the sake of brevity (well, relative brevity) I'm only including
the basic versions in this post.
*/


/*
Access the element at offset x from the top of the stack. (0 is
top).
*/

#define STACK(x) *(sp+1+(x))

/*
Popping an element from the stack. Note that since this changes the
stack pointer, it is not safe to call this macro in an expression
that also accesses the current sp.
*/

#define STACK_POP() *( ++sp )

/*
Pushing an element onto the stack. Like popping, this changes the
stack pointer, so use with care.
*/

#define STACK_PUSH(x) do{ \
long __tmp = x; \
*sp = __tmp; \
sp--; \
}while(0)
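
/*
The number of elements currently on the stack: the stack occupies
the top of memory and grows downward, so the height is the distance
from the last cell of memory down to sp.
*/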
#define STACK_HEIGHT() ((memory + MEM_SIZE-1) - sp)


/*
The following macro is used to dump the current machine state (pc,
heap pointer, stack pointer, and stack contents) to an output
stream. It's mostly intended for debugging purposes and bailing
out on type errors.
*/

#define DO_DUMP(stream) do{ \
int q=0; \
fprintf(stream, "pc: %d, hp: %d sp: %p height: %d\nstack:\n", \
pc-memory, hp-memory, sp, STACK_HEIGHT()); \
while(q < STACK_HEIGHT() ){ \
switch(CELL_TYPE(STACK(q))){ \
case NUM: \
fprintf(stream, "\t%d\n", NUM_TO_NATIVE(STACK(q))); \
break; \
case VCONST: \
if(IS_CHAR(STACK(q))){ \
fprintf(stream,"\tchar: %c\n", CHAR_TO_NATIVE(STACK(q))); \
}else if(IS_BOOL(STACK(q))){ \
fprintf(stream, "\tbool: %c\n", \
STACK(q) == TRUE_VAL ? 'T' : 'F'); \
}else{ \
fprintf(stream, "\tvc: %08lx\n", STACK(q)); \
} \
break; \
case LCONST: \
fprintf(stream, "\tlc: %d\n", NUM_TO_NATIVE(STACK(q))); \
break; \
case PTR: \
fprintf(stream, "\tptr: %d sz: %d\n", \
PTR_TARGET(STACK(q)), PTR_SIZE(STACK(q))); \
break; \
default: \
fprintf(stream, "\t(%1x)?: %08lx\n", \
CELL_TYPE(STACK(q)), STACK(q) & (~(0x3 << 30))); \
} \
q++; \
} \
}while(0)

/*
This is where life starts to really get interesting. The
interpreter will use what's called an indirect threaded dispatching
model. The idea is that each opcode acts as an index into an
array, the value in the array is the address of the native code
implementing that instruction. For a great discussion of different
instruction dispatching techniques check out David Mandelin's blog
post on SquirrelFish:
http://blog.mozilla.com/dmandelin/2008/06/03/squirrelfish/

Conveniently, the flag value used to specify a VM constant is a 0
in the top two bits of the word; this means that converting between
opcodes and their indices in the array is a nop.
*/

#define IDX_FROM_INS(i) (i)

/*
The cool thing about indirect threaded dispatching is that
interpreting the next instruction is just a matter of jumping to
the address specified in the array of opcode handlers. This is in
contrast to a traditional interpreter model that looks more like a
case statement embedded in a big while loop. The indirect threaded
approach requires no comparisons to figure out which handler to execute.

The NEXT macro encapsulates this dispatching paradigm. Every
instruction handler we write will end with a call to NEXT.
*/

#define NEXT goto *instructions[IDX_FROM_INS(*pc++)]

/*
As implied above, instruction handlers have to have a specific
format for everything to work nicely. We're going to use gcc's
computed goto feature to build our array, so each instruction will
start with a label naming the instruction, then some C code
implementing the handler, then a call to NEXT.
*/

#define INSTRUCTION(n,x) \
n: \
x; \
NEXT
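
/*
For example, INSTRUCTION(POP, STACK_POP()) expands (roughly) to:

    POP:
      *( ++sp );
      goto *instructions[IDX_FROM_INS(*pc++)];
*/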

/*
likely and unlikely are hints to GCC to help it arrange
if-then-else statements so that the common path is faster. We use
these to help optimize type assertions in the interpreter.
*/

#define unlikely(x) __builtin_expect((x),0)
#define likely(x) __builtin_expect((x),1)

/*
Now come a series of macros for checking and unpacking the
different VM types. The corresponding packers are defined in the
interpreter.h header file.
*/

#define PTR_TARGET(x) (((unsigned int)x) & 0x7fff)
#define PTR_SIZE(x) ((((unsigned int)x) >> 15) & 0x7fff)
#define NUM_TO_NATIVE(x) (((x) << 2) >> 2)
#define CHAR_TO_NATIVE(x) ((char)((x) & 0xff))
#define CELL_TYPE(x) (((unsigned int)x) & (0x3 << 30))
#define IS_NUM(x) (CELL_TYPE(x) == NUM)
#define IS_PTR(x) (CELL_TYPE(x) == PTR)
#define IS_LCONST(x) (CELL_TYPE(x) == LCONST)
#define IS_VCONST(x) (CELL_TYPE(x) == VCONST)
#define IS_CONST(x) (IS_LCONST(x) || IS_VCONST(x))
#define IS_BOOL(x) (x == TRUE_VAL || x == FALSE_VAL)
#define IS_CHAR(x) (IS_VCONST(x) && ((x) & CHAR_FLAG) == CHAR_FLAG)
#define IS_INS(x) (IS_VCONST(x) && ( (x) < NR_INS))
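
/*
For example, the word 0x80018002 decodes as: CELL_TYPE is (2 << 30),
i.e. PTR; PTR_SIZE is 3; and PTR_TARGET is 2 -- a pointer to a three
cell block starting at memory[2].
*/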

#define ASSERT_TYPE(x,t) if(unlikely(CELL_TYPE(x) != t)){TYPE_ERROR(t);}
#define ASSERT_NOT_TYPE(x,t) if(unlikely(CELL_TYPE(x) == t)){TYPE_ERROR(!t);}

/*
HEAP_CHECK is called before allocations to ensure that we don't
overflow. If the allocation is too big, we invoke the garbage
collector. If collecting garbage doesn't free up enough memory to
do the allocation, we crash.
*/

#define HEAP_CHECK(n) do{ \
long __tmp = n; \
if(hp + __tmp >= heap_base + heap_size){ \
heap_base = (heap_base == prog_end ? \
upper_heap : prog_end); \
gc(heap_base); \
if(hp + __tmp >= heap_base + heap_size){ \
fprintf(stderr, "Unable to allocate memory!\n"); \
exit(-ENOMEM); \
} \
} \
}while(0)

/*
When a type assertion fails we let the user know and then give a
backtrace. The error message could be better, it prints out useful
type names like (2 << 30) instead of "PTR". But something is
better than nothing. Hopefully, we never see these anyway.
*/

#define TYPE_ERROR(t) do{ \
fprintf(stderr, "Type error. Expected "#t"\n"); \
DO_DUMP(stderr); \
exit(-1); \
}while(0)

static long memory[MEM_SIZE]; /* all of memory */

static long *sp = &memory[MEM_SIZE-1]; /* The top 256 words of memory
are reserved for the stack */


static long *prog_end; /* pointer to the end of the
loaded program text */


static long *pc = memory; /* the program counter */

static long heap_size; /* maximum size of a heap
(computed based on
prog_end) */


static long *upper_heap; /* upper_heap == prog_end + heap_size; */

static long *heap_base; /* pointer to the first cell of
the heap. Should always be
either prog_end, or
upper_heap. */


static long *hp; /* the heap pointer. */

static long rr = MAKE_NUM(0); /* the root register can be
used by the language to
store an additional root for
gc (anything on the stack is
also treated as a root).
The intended use is as a
pointer to a structure
containing local variables. */


/*
The garbage collector is a basic stop and copy collector. It
doesn't really try to do anything clever, and (temporarily) uses a
rather obscene amount of memory (twice MEM_SIZE!).

The new_heap argument is either upper_heap or prog_end, whichever
is not currently active.
*/

void gc(long *new_heap)
{
long mappings[MEM_SIZE];
long worklist[MEM_SIZE];
long *work = worklist;
int i;
long tmp, val;
memset(mappings, 0xffffffff, MEM_SIZE*sizeof(long));

hp = new_heap;

/* here's the deal,
worklist is a stack of tree branches we need
to walk
mappings is a map from the old ptr target (offset in
the memory array) to a pointer in the new heap
to which the data must be copied.

The way this works is whenever we encounter a pointer
we check two things:
- is its entry in the mappings array equal to
0xffffffff? If so then we have not yet allocated
space for it in the new heap.

- is the target of the pointer inside the original
program code? if so, we do not copy it. Instead
we just set the entry in mappings to point to the
original target. However, we must still add this
cell to the worklist so that we walk its children.

If a cell must be copied we first set up its reverse
mapping based on the current heap pointer, increment
the heap pointer by the size of the pointer target,
then add the original pointer to the worklist.

We initialize the worklist by looking at the root
register and the stack.

We are finished when the worklist is empty.

We start by allocating space in the new heap for the target of
the root register (if it contains a pointer), and adding it to
the work list.
*/

if(CELL_TYPE(rr) == PTR){
*work = rr;
work++;
if(&memory[PTR_TARGET(rr)] >= prog_end){
rr = mappings[PTR_TARGET(rr)] = MAKE_PTR(hp-memory, PTR_SIZE(rr));
hp += PTR_SIZE(rr);
}else{
mappings[PTR_TARGET(rr)] = rr;
}
}

/*
Now we scan through the stack and do essentially the same thing we
did for the root register.
*/

for(i=0;i<STACK_HEIGHT();i++){
if(CELL_TYPE(STACK(i)) == PTR){
if(mappings[PTR_TARGET(STACK(i))] == 0xffffffff){
/* We haven't looked at this pointer target before. So add it
to the work queue and (if needed) allocate space. */

*work = STACK(i);
work++;
if(&memory[PTR_TARGET(STACK(i))] >= prog_end){
STACK(i) = mappings[PTR_TARGET(STACK(i))] = MAKE_PTR(hp-memory, PTR_SIZE(STACK(i)));
hp += PTR_SIZE(STACK(i));
}else{
mappings[PTR_TARGET(STACK(i))] = STACK(i);
}
}else{
/* We've already handled another reference to the same target,
so we can just update the value on the stack */

STACK(i) = mappings[PTR_TARGET(STACK(i))];
}
}
}

/*
ok, the work list has been initialized, we now walk down the list
copying the contents of each targeted cell as we go. If we find a
pointer, we perform the operations described above (allocation in
new heap, initializing the mapping, and adding to the worklist)
then write the new pointer-value into the target cell.
*/

while(work > worklist){
work--;
tmp = *work;

if(PTR_SIZE(tmp) == 0xff) continue;

for(i=0;i<PTR_SIZE(tmp);i++){
val = memory[PTR_TARGET(tmp)+i];
if(CELL_TYPE(val) == PTR){
if(mappings[PTR_TARGET(val)] == 0xffffffff){
/* the cell is a pointer we have not yet created a mapping
for. That means we must add it to the work list, and if
it is above the program text, allocate space in the new
heap */

*work = val;
work++;
if(&memory[PTR_TARGET(val)] >= prog_end){
/* set the value of the cell we're copying to be the new pointer */
memory[PTR_TARGET(mappings[PTR_TARGET(tmp)]) + i] =
mappings[PTR_TARGET(val)] = MAKE_PTR(hp - memory, PTR_SIZE(val));
/* update the heap pointer */
hp += PTR_SIZE(val);
}else{
/* the target of this pointer is in the program text just
perform a straight copy and set the value for this cell in
the mappings to the identity map */

memory[PTR_TARGET(mappings[PTR_TARGET(tmp)]) + i] = mappings[PTR_TARGET(val)] = val;
}
}else{
/* this is a pointer, but we've seen it already.
we create a new pointer cell based on the value in the
mappings[] array */

memory[PTR_TARGET(mappings[PTR_TARGET(tmp)]) + i] = mappings[PTR_TARGET(val)];
}
}else{
/* the cell isn't a pointer, we can just copy it */
memory[PTR_TARGET(mappings[PTR_TARGET(tmp)]) + i] = val;
}
}
}

/* Now scan through the original program text and update any heap
pointers to their new location */

for(i=0;i<prog_end-memory;i++){
if(CELL_TYPE(memory[i]) == PTR && mappings[PTR_TARGET(memory[i])] != 0xffffffff){
memory[i] = mappings[PTR_TARGET(memory[i])];
}
}
}

/*
Ok, we're done with the quick and dirty garbage collector. Let's
actually implement the interpreter!
*/

int main(int argc, char *argv[])
{
/* The long awaited array of instructions! It is imperative that
these appear in the order specified in interpreter.h */

static long instructions[] = {
(long)&&PUSH, (long)&&POP,
(long)&&SWAP, (long)&&DUP,
(long)&&ROT,

/* Control flow */
(long)&&CALL,
(long)&&RET, (long)&&JMP,
(long)&&JTRUE, (long)&&END,

/* Arithmetic */
(long)&&PLUS, (long)&&MUL,
(long)&&SUB, (long)&&DIV,
(long)&&MOD, (long)&&SHL,
(long)&&SHR, (long)&&BOR,
(long)&&BAND,

/* Comparison */
(long)&&EQ, (long)&&LT,

/* Reading and writing memory */
(long)&&STOR, (long)&&LOAD, (long)&&ALOC,

/* I/O */
(long)&&GETC, (long)&&DUMP,
(long)&&PINT, (long)&&PCHR,

/* Root Register manipulation */
(long)&&RDRR, (long)&&WTRR,

/* Type checking */
(long)&&ISNUM, (long)&&ISLCONST,
(long)&&ISPTR, (long)&&ISBOOL,
(long)&&ISCHR, (long)&&ISINS
};


/* We first do some basic startup stuff to load the program */
int fd;
long nread;
/* Read from a file or stdin */
fd = argc > 1 ? open(argv[1], O_RDONLY) : STDIN_FILENO;
if(fd < 0){
printf("Open failed\n");
exit(1);
}
/* read in as much as we can! */
nread = read(fd, memory, MEM_SIZE*sizeof(long));
if(nread < 0){
printf("Read failed\n");
exit(1);
}
/* scale to words instead of bytes */
nread /= 4;

/* Initialize the global variables pointing to the end of the text
and the heap pointer */

prog_end = heap_base = hp = memory + nread;

/* Give half the remaining memory to each heap */
heap_size = (MEM_SIZE - STACK_SIZE - nread)/2;
upper_heap = hp + heap_size;

/* The assembler (included in the tarball) outputs everything in
network byte order (my main desktop is PPC) so sweep through and
fix everything for the native machine. We could skip this on
PPC.
*/

for(; nread > 0 ; nread--){
memory[nread-1] = ntohl(memory[nread-1]);
}

/* pc points to the first cell of memory which is the first opcode
to execute. All we have to do to kick things off is call the
NEXT macro to jump to that instruction handler and increment the
pc! */

NEXT;

/* Now we just have to implement our instructions. These all use
the INSTRUCTION macro defined above. Most of them are fairly
straightforward */


/* stack manipulation: push, pop, swap, and rotate */
INSTRUCTION(PUSH, STACK_PUSH(*pc++));
INSTRUCTION(POP, STACK_POP());
INSTRUCTION(SWAP,
do{
long tmp = STACK(0);
STACK(0) = STACK(1);
STACK(1) = tmp;
}while(0)
);
INSTRUCTION(DUP, STACK_PUSH(STACK(0)));
INSTRUCTION(ROT,
do{
long tmp = STACK(0);
STACK(0) = STACK(1);
STACK(1) = STACK(2);
STACK(2) = tmp;
}while(0)
);

/* Flow control */
/* Function calls are really primitive: pop the destination address
off the stack, push the current pc on, then set the new pc. */

INSTRUCTION(CALL,
do{
long tmp = pc - memory;
ASSERT_TYPE(STACK(0), PTR);
pc = memory + PTR_TARGET(STACK(0));
STACK(0) = MAKE_PTR(tmp,0);
}while(0)
);

/* Returning is basically the opposite of calling (as you'd expect).
The exception is that the top of stack is treated as the function
return value, so we restore the pc from the second to top stack
element, then copy the top element over it and pop. */

INSTRUCTION(RET,
do{
ASSERT_TYPE(STACK(1), PTR);
pc = memory + PTR_TARGET(STACK(1));
STACK(1) = STACK(0);
STACK_POP();
}while(0)
);

/* The target of an unconditional jump is the top stack element.
All jumps are absolute. */

INSTRUCTION(JMP, ASSERT_TYPE(STACK(0), PTR);
pc = memory + PTR_TARGET(STACK_POP()));

/* Conditional jumping: top element is the destination, next element
is a boolean indicating whether or not to jump. */

INSTRUCTION(JTRUE,
do{
ASSERT_TYPE(STACK(0), PTR);
if(unlikely(!IS_BOOL(STACK(1)))){TYPE_ERROR(BOOL);}
pc = ((STACK(1) == TRUE_VAL) ?
memory + PTR_TARGET(STACK(0)) : pc);
STACK_POP();
STACK_POP();
}while(0)
);

/* END is used to stop the interpreter, just exit the main
routine. */

INSTRUCTION(END, return 0);

/* arithmetic */
/* Note that for PLUS and MUL the NUM_TO_NATIVE conversions are not
really needed since the flag bits will just overflow and get
reset. We do them anyway just for the sake of being explicit. */

INSTRUCTION(PLUS,
ASSERT_TYPE(STACK(0), NUM);
ASSERT_TYPE(STACK(1), NUM);
STACK(1) = MAKE_NUM(NUM_TO_NATIVE(STACK(1)) +
NUM_TO_NATIVE(STACK(0)));
STACK_POP());

INSTRUCTION(MUL,
ASSERT_TYPE(STACK(0), NUM); ASSERT_TYPE(STACK(1), NUM);
STACK(1) = MAKE_NUM(NUM_TO_NATIVE(STACK(1)) *
NUM_TO_NATIVE(STACK(0)));
STACK_POP());

INSTRUCTION(SUB,
ASSERT_TYPE(STACK(0), NUM);
ASSERT_TYPE(STACK(1), NUM);
STACK(1) = MAKE_NUM(NUM_TO_NATIVE(STACK(1)) -
NUM_TO_NATIVE(STACK(0)));
STACK_POP());

INSTRUCTION(DIV,
ASSERT_TYPE(STACK(0), NUM);
ASSERT_TYPE(STACK(1), NUM);
STACK(1) = MAKE_NUM(NUM_TO_NATIVE(STACK(1)) /
NUM_TO_NATIVE(STACK(0)));
STACK_POP());

INSTRUCTION(MOD,
ASSERT_TYPE(STACK(0), NUM);
ASSERT_TYPE(STACK(1), NUM);
STACK(1) = MAKE_NUM(NUM_TO_NATIVE(STACK(1)) %
NUM_TO_NATIVE(STACK(0)));
STACK_POP());

/* Bitwise operations. */
INSTRUCTION(SHL, do{
ASSERT_TYPE(STACK(0), NUM); ASSERT_TYPE(STACK(1), NUM);
STACK(1) = MAKE_NUM(NUM_TO_NATIVE(STACK(1)) << NUM_TO_NATIVE(STACK(0)));
STACK_POP();
}while(0));

INSTRUCTION(SHR, do{
ASSERT_TYPE(STACK(0), NUM); ASSERT_TYPE(STACK(1), NUM);
STACK(1) = MAKE_NUM(NUM_TO_NATIVE(STACK(1)) >> NUM_TO_NATIVE(STACK(0)));
STACK_POP();
}while(0));

INSTRUCTION(BOR, do{
ASSERT_TYPE(STACK(0), NUM); ASSERT_TYPE(STACK(1), NUM);
STACK(1) = MAKE_NUM(NUM_TO_NATIVE(STACK(1)) | NUM_TO_NATIVE(STACK(0)));
STACK_POP();
}while(0));

INSTRUCTION(BAND, do{
ASSERT_TYPE(STACK(0), NUM); ASSERT_TYPE(STACK(1), NUM);
STACK(1) = MAKE_NUM(NUM_TO_NATIVE(STACK(1)) & NUM_TO_NATIVE(STACK(0)));
STACK_POP();
}while(0));

/* Comparison */
/* Note that EQ is type agnostic, whereas LT requires that its
arguments be integers. */

INSTRUCTION(EQ, do{
STACK(1) = (STACK(0) == STACK(1) ? TRUE_VAL : FALSE_VAL);
STACK_POP();
}while(0));

INSTRUCTION(LT, do{
ASSERT_TYPE(STACK(0), NUM); ASSERT_TYPE(STACK(1), NUM);
STACK(1) = (STACK(0) < STACK(1) ? TRUE_VAL : FALSE_VAL);
STACK_POP();
}while(0));

/* Memory access */

/* Since we don't allow explicit arithmetic on pointers the load and
store operations use a base pointer (second to top element of
stack) and an integer offset (top element of stack). We use the
size field of the pointer to ensure memory safety.

For stores, the value to store is the third thing on the stack. */

INSTRUCTION(STOR, do{
ASSERT_TYPE(STACK(0), NUM);
ASSERT_TYPE(STACK(1), PTR);
if(NUM_TO_NATIVE(STACK(0)) < 0 ||
NUM_TO_NATIVE(STACK(0)) >= PTR_SIZE(STACK(1))){
fprintf(stderr, "Invalid store: offset %d out of bounds\n",
NUM_TO_NATIVE(STACK(0)));
exit(1);
}
memory[PTR_TARGET(STACK(1)) + NUM_TO_NATIVE(STACK(0))] = STACK(2);
STACK_POP();
STACK_POP();
STACK_POP();
}while(0)
);

INSTRUCTION(LOAD, do{
ASSERT_TYPE(STACK(0), NUM);
ASSERT_TYPE(STACK(1), PTR);
if(NUM_TO_NATIVE(STACK(0)) < 0 ||
NUM_TO_NATIVE(STACK(0)) >= PTR_SIZE(STACK(1))){
fprintf(stderr, "Invalid load: offset %d out of bounds\n",
NUM_TO_NATIVE(STACK(0)));
exit(1);
}
STACK(1) = memory[PTR_TARGET(STACK(1)) + NUM_TO_NATIVE(STACK(0))];
STACK_POP();
}while(0));

/* Memory Allocation */
INSTRUCTION(ALOC, do{
long __tmp;
ASSERT_TYPE(STACK(0), NUM);
HEAP_CHECK(NUM_TO_NATIVE(STACK(0)));
__tmp = MAKE_PTR(hp - memory, NUM_TO_NATIVE(STACK(0)));
hp += NUM_TO_NATIVE(STACK(0));
STACK(0) = __tmp;
}while(0));

/* I/O */
INSTRUCTION(GETC, STACK_PUSH(MAKE_CHAR(getchar())));
INSTRUCTION(DUMP, DO_DUMP(stdout));
INSTRUCTION(PINT, ASSERT_TYPE(STACK(0), NUM);
printf("%ld", NUM_TO_NATIVE(STACK(0))); STACK_POP());
INSTRUCTION(PCHR, ASSERT_TYPE(STACK(0), VCONST);
putchar(CHAR_TO_NATIVE(STACK_POP())));

/* Root register manipulation */
INSTRUCTION(RDRR, STACK_PUSH(rr));
INSTRUCTION(WTRR, rr = STACK_POP());

/* Type checking */
INSTRUCTION(ISNUM, STACK(0) = (IS_NUM(STACK(0))) ? TRUE_VAL : FALSE_VAL);
INSTRUCTION(ISLCONST, STACK(0) = (IS_LCONST(STACK(0))) ? TRUE_VAL : FALSE_VAL);
INSTRUCTION(ISPTR, STACK(0) = (IS_PTR(STACK(0))) ? TRUE_VAL : FALSE_VAL);
INSTRUCTION(ISBOOL, STACK(0) = (IS_BOOL(STACK(0))) ? TRUE_VAL : FALSE_VAL);
INSTRUCTION(ISCHR, STACK(0) = (IS_CHAR(STACK(0))) ? TRUE_VAL : FALSE_VAL);
INSTRUCTION(ISINS, STACK(0) = (IS_INS(STACK(0))) ? TRUE_VAL : FALSE_VAL);
return 0;
}
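
/*
To make the encoding concrete, here is a hedged example (assuming the
assembler simply emits each opcode's index in the table above, which
is what IDX_FROM_INS suggests): the program "print 2 + 3" would be
the seven word sequence

    PUSH, MAKE_NUM(2), PUSH, MAKE_NUM(3), PLUS, PINT, END

PUSH's operand is the word that follows it in the instruction stream,
already tagged as a VM integer.
*/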

/*
And we're done. Next time will be either a compiler for a simple
language to this interpreter (hopefully a better compiler than the
one in the tarball), or some form of JIT add-on to the interpreter.
*/






Saturday, January 24, 2009

CTL Model Checking

One of my big interests in computer science is the notion of programs that treat
other programs as data. This includes compilers, interpreters, and programs
that prove properties of other programs. Since this is the first real post to
this blog, I thought I'd make it a good one: model checking.

This post is a literate haskell program (in "Bird style"). A plain text version is
available at:
http://aaron.13th-floor.org/src/model-checking/ctl-checker.lhs

You should be able to compile it directly using GHC.

Today's problem is the following: Suppose we have a program that looks vaguely
like:

do{
    cmd <- waitForInput;
    resp <- executeCommand cmd;
    if resp == Failure
        errorflag <- true;
        exit;
    else
        print "Success"
}while(true)

And we would like to prove that whenever we get to the top of the loop, there is
some way for the program to eventually output "Success".

This is a toy example, so we can tell by looking that if (and only if)
executeCommand returns something other than Failure then the program will
eventually print "Success" (most of you probably arrived at this using an
informal mental process similar to proving Hoare triples using Dijkstra's
Weakest Precondition; that's not what we're doing today... maybe later). Our
approach will be different: instead of looking at the program as a sentence in
some axiomatic system (a la Hoare, Dijkstra et al) we will look at it as a
non-deterministic state machine, and we will verify our property by showing
that there exist runs of the machine that start by waiting for input and
eventually transition to writing output. In short, we will be writing a model
checker. Specifically, we will write an explicit state Computation Tree Logic
(CTL) model checker.

Before we begin, we need to import a number of packages for manipulating basic
data types.

> import Array
> import List
> import Maybe

We'll also be using Parsec to build parsers for state transition systems, and
CTL propositions so we need a number of packages for that:

> import System.Environment
> import Text.ParserCombinators.Parsec.Language
> import Text.ParserCombinators.Parsec
> import Text.ParserCombinators.Parsec.Expr
> import qualified Text.ParserCombinators.Parsec.Token as P

We also need a couple of list manipulation functions that aren't part of the
List package. We'll get them out of the way here

> difference :: Eq a => [a] -> [a] -> [a]
> difference xs ys = [x | x <- xs, not (any (\y -> x == y) ys)]
>
> elemOf :: Eq a => [a] -> a -> Bool
> elemOf [] _ = False
> elemOf (y:ys) x = if x == y then True else elemOf ys x

Explicit state model checking roughly involves verifying properties of state
transition systems by searching the state space to show that the property holds
for all reachable states in the system.

To do this we'll need three key elements:
- A representation of state transition systems
- A logic for specifying the properties we wish to check
- Routines which can verify that a system does (or doesn't) exhibit a
particular property.

Specifically, the state transition systems we wish to model are known as Kripke
structures, and are defined by: a finite set of states (including a
distinguished initial state), a set of atomic propositions (i.e., facts that
may be true or false), a function which given a state returns the set of
propositions that are true in that state, and a transition function which given
the current state of the system returns the set of states to which the system
may transition.

A Kripke structure could be defined by two enumerated data types (one for
states, and one for propositions), an element of the state type defined to be
the initial state, a function from states to lists of propositions, and a
transition function mapping states to lists of states.

> data ExampleState = Waiting | Executing | Success | Failure
> data ExampleProposition = AcceptingInput | OutputReady | ErrorFlag

> exampleInitialState = Waiting

> examplePropsForState :: ExampleState -> [ExampleProposition]
> examplePropsForState Waiting = [AcceptingInput]
> examplePropsForState Executing = []
> examplePropsForState Success = [OutputReady]
> examplePropsForState Failure = [ErrorFlag]

> exampleTransitions :: ExampleState -> [ExampleState]
> exampleTransitions Waiting = [Waiting, Executing]
> exampleTransitions Executing = [Success, Failure]
> exampleTransitions Success = [Waiting]
> exampleTransitions Failure = [Failure]

This would work well if our goal was to actually execute the system. We could
write functions like:

> exampleStatesAfterNSteps :: Int -> [ExampleState]
> exampleStatesAfterNSteps n = statesAfterNStepsFrom n exampleInitialState
> where statesAfterNStepsFrom n state = if n == 0
> then [state]
> else (concatMap
> (statesAfterNStepsFrom (n - 1))
> (exampleTransitions state))

But that's not really what we're after here. Our goal is to be able to verify
more interesting properties about transition systems. For example, we may want
to know that it is always the case that the system will eventually accept more
input, or raise the error flag. For this we'll need to represent Kripke
structures as first class objects (types aren't first class objects, so neither
is our above definition; we can't pass it as an argument to a function!).

Kripke structures can be thought of as directed graphs where each node
corresponds to a state and is labeled by the atomic propositions that are true
in that state; an edge between two states indicates that the system may
transition from the source state to the destination.

We'll enhance this just a little and allow the user to provide descriptive names
for the states. Both state names and atomic propositions will be represented as
strings, so we'll add some type aliases to help keep things straight.

> type StateIdx = Int
> type StateName = String
> type AtomicProp = String

We'll represent Kripke structures as an array of nodes and an adjacency matrix
describing the edges of the graph. (Sidenote: from here on out I'll tend to use
the terms Graph and Node rather than Kripke Structure and State; for the
purposes of this discussion the terms are synonymous.)

> type Matrix b = Array StateIdx (Array StateIdx b)
> data Kripke = K {ss :: Array StateIdx (StateName,[AtomicProp]),
> es :: Matrix Bool}

Many times we want to talk about states in isolation (rather than whole
systems), but in order to get the state's successors, predecessors, etc we need
to haul the graph around, and know the state's index in the node array. To ease
this burden, we'll introduce a new type for states.

> data State = S {idx :: StateIdx,
> name :: StateName,
> props :: [AtomicProp],
> sys :: Kripke
> }

Hopefully this all makes sense and seems fairly natural thus far. Before we
move on to CTL, we'll introduce some basic functions for playing with Kripke
structures and states.

Get a particular state

> state :: StateIdx -> Kripke -> State
> state i g = S i (fst s) (snd s) g
> where s = (ss g)!i

Get a list of all the states:

> states :: Kripke -> [State]
> states g = map (\x -> state x g) (indices.ss $ g)


Impose an ordering on states (we'll just assume they belong to the same system)

> instance Ord State where
> compare a b = compare (idx a) (idx b)

Equality on states (again assuming same system)

> instance Eq State where
> a == b = (idx a) == (idx b)

And we'd like to print out states (this should really be done using shows to
avoid the non-tail recursive appending, but meh).

> instance Show State where
> show n = "{"++(name n)++"}"

Get all the successors of a given state (i.e., the states to which the system may
transition from the given state)

> successors :: State -> [State]
> successors s = [s' | s' <- (states.sys $ s), (es.sys $ s)!(idx s)!(idx s')]

And finally a function for actually making a Kripke structure from a list of
pairs describing atomic propositions and adjacencies:

> mk_kripke :: [((StateName, [AtomicProp]), [StateIdx])] -> Kripke
> mk_kripke xs = (K
> (listArray (0,(length xs)-1) (map fst xs))
> (listArray (0,(length xs)-1)
> (map (\x -> (array (0,(length xs)-1)
> [(i,any (\z -> z == i) (snd $ x))
> | i <- [0..(length xs)-1]]))
> xs)))
>

Now we can model the example Kripke structure above using this representation

> exampleKripke = mk_kripke [(("Waiting", ["AcceptingInput"]), [1]),
> (("Executing", []), [2, 3]),
> (("Success", ["OutputReady"]), [0]),
> (("Failure", ["ErrorFlag"]), [3])]

And we can write a generic version of the statesAfterNSteps function:

> statesAfterNSteps :: Kripke -> Int -> [StateName]
> statesAfterNSteps g n = map name (statesAfterNStepsFrom n (state 0 g))
> where statesAfterNStepsFrom n s = if n == 0
> then [s]
> else (concatMap
> (statesAfterNStepsFrom (n - 1))
> (successors s))
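
For example, two steps from the initial state the system can be in either
Success or Failure (example_two_steps is a name added here just for
illustration; it isn't part of the checker):

> example_two_steps :: [StateName]
> example_two_steps = statesAfterNSteps exampleKripke 2 -- ["Success","Failure"]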

Great, that's all we need as far as Kripke/Graph manipulation functions. Now's
probably a good time to grab a beer or something. We're moving into deeper
waters pretty soon.

It's time to start thinking about CTL.

CTL is like propositional logic, except that instead of expressions being
evaluated with respect to a state (a mapping of truth-values to atomic
propositions), the expressions are evaluated with respect to a Kripke structure
in some state.

CTL includes all the standard operators of propositional logic like /\ (and),
\/ (or), and ~ (not), but adds a notion of time.

Specifically, CTL uses a branching notion of time (hence Tree logic). The idea
is that the current state is the root of a tree, all of the successors of the
current state are the children of the root, and their successors are the
grand-children of the root (arranged properly). One can think of this as
unrolling the graph of the Kripke structure into an infinite height tree by
inserting a fresh copy of each node every time it appears. In the example
Kripke system above, the first few levels of the tree might look like:

> data ExampleStateTree = ExTree (ExampleState, [ExampleStateTree])
> exampleTree :: ExampleStateTree
> exampleTree = ExTree (Waiting,
> [ExTree (Executing,
> [ExTree (Success,
> [ExTree (Waiting,
> [ExTree (Executing,
> [ExTree (Success, []),
> ExTree (Failure, [])])])]),
> ExTree (Failure,
> [ExTree (Failure,
> [ExTree (Failure,
> [ExTree (Failure, [])])])])])])

To handle the branching time concept, CTL includes two operators for quantifying
over paths: forall paths (written A), and there exists a path (written E). In
CTL all temporal operators must be quantified using one of these two operators
(this is not true for CTL*).

The primitive temporal operators of CTL are: next, always, and until. Combining
these operators with modalities and propositional phrases allows statements such
as:
AX(y)
meaning:
"on all paths (from the current state) the proposition 'y' is true in the
next state"
and:
E (x U y)
meaning:
"there exists a path (from the current state) along which the proposition 'x'
is true until the proposition 'y' is true (and 'y' is eventually true)."

Other common CTL operators are finally (F), release (R), and weak until (W),
which can be derived as:
AF x = A(true U x)
A(x R y) = ~ E(~ x U ~ y)
A(x W y) = A(y R (y \/ x))

Note: the analogous derivations hold with the roles of A and E exchanged
(e.g. EF x = E(true U x) and E(x R y) = ~ A(~ x U ~ y)).

The release operator may not be immediately obvious; the intuition is that x R y
states that y must be true until both y and x are true at the same time,
i.e. that x releases the obligation of holding y true. Weak until is like the
normal until but without the obligation that the second proposition ever become
true (in CTL* we could write x W y as ((x U y) \/ G x), but this would not be
valid CTL because the two temporal operators would be grouped under the same
path quantifier, which is invalid).

This isn't a very rigorous discussion of CTL, but it should get the gist of it
across. And it is detailed enough that we can express the property we wish to
prove as a CTL proposition:
(AcceptingInput /\ EF OutputReady)

As we move through the model checker you will see the exact semantics of each
operator of CTL and later the complete syntax for the language (since we'll be
writing a parser).

The important question now is: how should we represent CTL expressions? An
obvious choice might be to represent propositions as:

> type Prop_ex1 = State -> Bool

One attractive feature of this is we might be able to write our checker very
easily as:

> check_ex1 :: Prop_ex1 -> Kripke -> Bool
> check_ex1 p g = p (state 0 g)

We can write some of the basic CTL operators very easily with this
representation, as in:

> andp_ex1 :: Prop_ex1 -> Prop_ex1 -> Prop_ex1
> andp_ex1 p1 p2 = (\n -> if (p1 n) then (p2 n) else False)

> orp_ex1 :: Prop_ex1 -> Prop_ex1 -> Prop_ex1
> orp_ex1 p1 p2 = (\n -> if (p1 n) then True else (p2 n))

These aren't too bad, but we'll see later on that writing the temporal operators
out in this way would be quite bothersome. So instead of using a checker like
check_ex1, our approach will be to determine the set of all states which satisfy
the CTL proposition, and then check whether the initial state is in that set.

This suggests the following type for propositions:

> type Prop = [State] -> [State]

This has the nice properties that Propositions are composable, and don't require
a special evaluator function. We can now easily write the basic operators from
propositional logic:

> satisfies :: AtomicProp -> Prop
> satisfies a ss = [s | s <- ss, any (\b -> b==a) (props s)]
>
> notp :: Prop -> Prop
> notp p ss = difference ss (p ss)
>
> orp :: Prop -> Prop -> Prop
> orp p1 p2 ss = union (p1 ss) (p2 ss)
>
> andp :: Prop -> Prop -> Prop
> andp p1 p2 ss = p2.p1 $ ss

Note how elegant the definition of /\ becomes; because propositions act like
filters, composition acts like conjunction. Very sexy.
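
As a quick sanity check (check_atomic_ex is our name, not part of the checker),
only the Success state satisfies the atomic proposition OutputReady:

> check_atomic_ex :: [State]
> check_atomic_ex = satisfies "OutputReady" (states exampleKripke) -- [{Success}]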

The most basic temporal operator in CTL is next. Since we know that temporal
operators are always paired with path quantifiers, we will write two
propositions for next: one quantified by exists, the other by forall.

> exists_next :: Prop -> Prop
> exists_next p ss = [s | s <- ss, (p.successors $ s) /= []]
>
> forall_next :: Prop -> Prop
> forall_next p ss = [s | s <- ss, (difference (successors s)
> (p.successors $ s)) == []]
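
For instance (again, the name check_ex_next is ours), Executing is the only
state in the example system with a successor satisfying OutputReady:

> check_ex_next :: [State]
> check_ex_next = exists_next (satisfies "OutputReady") (states exampleKripke) -- [{Executing}]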

Next is fairly simple because it only needs to consider a predefined set of
states: the successors of the current state. But how can we represent the more
forward looking operators like Always or Until? There's no way to know how far
into the future we need to look to verify these.

This is where we begin to really motivate the choice of representation for Prop.

The approach we take relies on two key observations:
1) A Kripke structure contains a finite number of states
2) The remaining CTL temporal operators can be expressed as greatest or
least fixed points of monotonic functions on the lattice of sets of states
of the Kripke structure.

The first point is hopefully obvious, the second requires a bit of explaining.

A lattice is just a collection of elements with a partial ordering and the
property that any two elements have a greatest lower bound and a least upper
bound (and consequently so does any finite, nonempty set of elements, which is
the real definition). In this case the elements are sets of states, and the
ordering is the subset relation.

For example if

> set1_ex1 = [Waiting, Executing]
> set2_ex1 = [Executing, Failure]
> set3_ex1 = [Waiting, Executing, Failure]
> set4_ex1 = [Executing]

Then set3_ex1 is the least upper bound of set1_ex1 and set2_ex1; that is,
set3_ex1 is the smallest set that contains both set1_ex1 and set2_ex1 as
subsets. And set4_ex1 is the greatest lower bound of set1_ex1 and set2_ex1.
An important sidenote is that the least upper bound may also be called the LUB,
the join, or the supremum, and the greatest lower bound may be called the GLB,
the meet, or the infimum.

All finite lattices have a top element that is greater than all other elements
and a bottom element that is less than all other elements. In the case of a
lattice where the elements are the subsets of some other set, the top is the
original set and the bottom is the empty set.

In our case this would be:

> top_ex1 = [Waiting, Executing, Success, Failure]
> bottom_ex1 = []

A monotonic function (f) on a lattice is one for which either
x <= (f x) for all x, or (f x) <= x for all x

Which is to say that the function either always moves up the lattice, or always
moves down the lattice. An important theorem of lattice theory states that a
monotonic function on a finite lattice has unique least and greatest fixed
points (this is actually a corollary to the stronger Knaster-Tarski theorem,
which states that any order preserving function on a complete lattice has a
complete lattice of fixed points, but that is irrelevant). The effect of this
is both obvious and profound at the same time; if you keep moving up
(respectively down) a finite lattice, you eventually stop.

To write our functions for the other temporal operators of CTL, we need to
understand the sense in which these operators are fixpoints of recursive
functions on the lattice of sets of states.

Let's start with AG (forall always)

The key observation is to see that a state satisfies (AG p) if and only if the
state satisfies p, and all of its successors also satisfy (AG p).

This suggests a strategy for defining forall_always: start by considering all
states satisfying p as candidates, then recursively eliminate candidates which
don't satisfy (forall_next p) setting p <- (forall_next p) at each recursion.

> forall_always_ex1 p xs = helper p (p xs)
> where helper p' candidates = let candidates' = (intersect candidates
> (p' candidates)) in
> if candidates' == candidates
> then candidates
> else (helper (forall_next
> (intersect candidates))
> candidates')

The key here is that at each recursive call to helper, candidates' must be a
subset of candidates so eventually it must either reach the empty list (the
bottom of the lattice), or the two will be equal. This is what is meant by a
greatest fixed point. We start at the top of the lattice (the set of all
states) and at each recursion we move a step down the lattice (to some subset of
the previous candidate set) until we get stuck. This is guaranteed to work
because the lattice is finite (size = 2 ^ number of states), and the candidate
set shrinks at every stage.

In fact we can generalize the structure of the helper function above into a
greatest fixed point operator

> gfp :: Prop -> Prop
> gfp f ss = let ss' = intersect ss $ f ss in
> if ss' == ss then ss else gfp f ss'

Now we can define forall_always easily as:

> forall_always :: Prop -> Prop
> forall_always p g = gfp (\ss -> forall_next (intersect ss) ss) (p g)
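
For example (check_ag_ex is a name added here for illustration): once the
example system reaches Failure it can never leave, so Failure is the only
state satisfying AG ErrorFlag:

> check_ag_ex :: [State]
> check_ag_ex = forall_always (satisfies "ErrorFlag") (states exampleKripke) -- [{Failure}]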

exists_always, forall_release, and exists_release are all greatest fixed points
and can be defined similarly:

> exists_always :: Prop -> Prop
> exists_always p g = gfp (\ss -> exists_next (intersect ss) ss) (p g)
>
> forall_release :: Prop -> Prop -> Prop
> forall_release p1 p2 g = gfp (\ss -> (orp p1
> (andp (forall_next (intersect ss)) p2)
> ss)) (p2 g)
>
> exists_release :: Prop -> Prop -> Prop
> exists_release p1 p2 g = gfp (\ss -> (orp p1
> (andp (exists_next (intersect ss)) p2)
> ss)) (p2 g)

The other operators (forall/exists_eventually and forall/exists_until) are least
fixed point operators. With these we begin by assuming that no states satisfy
the proposition, and then add nodes from a candidate set until we run out of
states to add.

We'll start by defining forall_eventually analogously to how we defined
forall_always_ex1 above. Clearly any state in (p x) should also be in
(forall_eventually p x), as should any state satisfying (forall_next p x), and
(forall_next (forall_next p x)), and so on.

> forall_eventually_ex1 :: Prop -> Prop
> forall_eventually_ex1 p xs = helper p (p xs) (difference xs (p xs))
> where helper p sats cs = let newsats = p cs in
> if newsats == [] then sats
> else let sats' = union sats newsats in
> (helper (forall_next (intersect sats')) sats'
> (difference cs newsats))

Note that in the least fixpoint case we have to track two different sets of
states: those states which definitely satisfy the proposition (called sats) and
those which we have yet to consider (called cs for candidates). At each step we
find the elements in cs for which all successors are known to satisfy the
previous proposition and move them to sats. We then recur with the new set of
satisfying nodes and candidates.

Like the greatest fixed point function, we can generalize the structure of
forall_eventually_ex1 into a least fixed point function:

> lfp :: ([State] -> Prop) -> [State] -> Prop
> lfp f ss c = let ss' = union ss $ f ss candidates in
> if ss' == ss then ss else lfp f ss' candidates
> where candidates = difference c ss

Then define forall_eventually in terms of lfp:

> forall_eventually :: Prop -> Prop
> forall_eventually _ [] = []
> forall_eventually p (s:ss) = intersect (s:ss) fp
> where fp = lfp (\ss -> forall_next (intersect ss)) (p allss) allss
> allss = states.sys $ s

This is a bit peculiar; we would really prefer the definition to just be the
fp = ...
part, as it is for the greatest fixed point operators. But that doesn't actually
work in this case. To see why, consider the following proposition:
AcceptingInput /\ (EF OutputReady)

If we define exists_eventually the way we'd like to as:

> exists_eventually_ex1 :: Prop -> Prop
> exists_eventually_ex1 p ss = lfp (\ss -> exists_next (intersect ss)) (p ss) ss

then we could evaluate this proposition on our example system using

> evaluate_ex1 = andp (satisfies "AcceptingInput")
> (exists_eventually_ex1 (satisfies "OutputReady"))
> (states exampleKripke)

By inspection we can see that the Waiting state does satisfy this property:

AcceptingInput is true,
and OutputReady is true in the Success state which is reachable via the path
Waiting -> Executing -> Success.

But evaluate_ex1 is []. What happened? If we unfold the andp from the
expression above we see the problem:

exists_eventually_ex1 (satisfies "OutputReady")
(satisfies "AcceptingInput" (states exampleKripke))

The proposition (satisfies "AcceptingInput" ...) will return only the node
Waiting, so when the exists_eventually_ex1 is unfolded we'll get

lfp (\ns -> exists_next (intersect ns))
(satisfies "OutputReady" [Waiting])
[Waiting]

But Waiting doesn't satisfy OutputReady, so the initial satisfying set is
empty, and because the candidate set contains only Waiting, the intermediate
states on the path to Success can never be added. Instead we need to start the
candidate set as all states in the system, and intersect with the original
state set at the end.

> exists_eventually :: Prop -> Prop
> exists_eventually _ [] = []
> exists_eventually p (s:ss) = intersect (s:ss) fp
> where fp = lfp (\ss -> exists_next (intersect ss)) (p allss) allss
> allss = states.sys $ s
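
With this definition the motivating proposition behaves as expected; the
initial Waiting state survives the filter (evaluate_ex2 is a name added here
for illustration):

> evaluate_ex2 :: [State]
> evaluate_ex2 = andp (satisfies "AcceptingInput")
>                     (exists_eventually (satisfies "OutputReady"))
>                     (states exampleKripke) -- [{Waiting}]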

> forall_until :: Prop -> Prop -> Prop
> forall_until _ _ [] = []
> forall_until p1 p2 (s:ss) = intersect (s:ss) fp
> where fp = lfp (\ss -> andp (forall_next (intersect ss)) p1) (p2 allss) allss
> allss = states.sys $ s

> exists_until :: Prop -> Prop -> Prop
> exists_until _ _ [] = []
> exists_until p1 p2 (s:ss) = intersect (s:ss) fp
> where fp = lfp (\ss -> andp (exists_next (intersect ss)) p1) (p2 allss) allss
> allss = states.sys $ s

That's the end of the actual program logic. Now all we need is a parser for CTL
formulas, a parser for Kripke structures, and a main function that ties
everything together.

Parsec provides some great machinery for writing top down recursive descent
parsers. The basic idea is that we must write functions which recognize each
terminal in our target grammar, and then build functions for recognizing
non-terminals through composition of recognizers as specified by the grammar
rules. Parsec can be a little daunting at first because it uses a lot of rather
peculiar looking types; don't worry, it's easier than it looks. For full
information see the Parsec web page:
http://legacy.cs.uu.nl/daan/parsec.html

We'll begin with the Kripke parser.

We'll use a pretty simple format for Kripke structure specification; we just
want to read things like:

state "waiting": props: [accepting_input] goes_to: ["executing"]
state "executing": props: [] goes_to: ["success", "failure"]
state "success": props: [output_ready] goes_to: ["waiting"]
state "failure": props: [error_flag] goes_to: ["failure"]

The grammar for this simple Kripke structure language looks like:

kripke ::= <statedef> <statedeflist>
stateDef ::= state <statename>: props: [<proplist>] goes_to: [<statenamelist>]
stateName ::= string_literal
prop ::= <propstart> <propchars>
propStart ::= <lowercase>
|= _
propChars ::= <propchar><propchars>
|=
propChar ::= <propstart>
|= <digit>
stateDefList ::= <statedef> <statedeflist>
|=
propList ::= <prop>, <proplist>
|=
stateNameList ::= <statename>, <statenamelist>
|=

This grammar is completely unambiguous, which makes the parser fairly easy to
write. The parser is a function which, given a String, produces either an error
or a Kripke structure (as defined earlier); thus, it will have the type:

> parse_kripke :: String -> Either ParseError Kripke

The definition of mk_kripke depends on how we choose to represent parsed state
definitions. In order to use the mk_kripke function to build a graph from a list
of nodes we would like the state parsing function to return triples like:
((name, propositions), destinations)

Unfortunately, that won't work. The transitions are described by state names in
the input file but mk_kripke expects them to be node indices. What we need is a
lookup function that given a state name and a list of (index, name) pairs will
return the correct index:

> state_lookup :: String -> [(Int, String)] -> Maybe Int
> state_lookup _ [] = Nothing
> state_lookup s ((idx,name):xs) = if s == name then Just idx else state_lookup s xs

This would do the trick, except that the language has to allow for forward
references to states (i.e., a state can transition to a state that hasn't been
defined yet, and thus doesn't have an index). So we can't simply call
state_lookup while parsing each state because we don't yet know the complete
mapping between states and indexes. To get around this, we use the power of
first-class functions to give the state parser the type:

> k_state :: Parser ((StateName, [AtomicProp]), [[(Int, String)] -> Maybe StateIdx])

Similarly parsing a state_list returns a list of these beasts

> k_state_list :: Parser [((StateName, [AtomicProp]), [[(StateIdx, String)] -> Maybe StateIdx])]

This makes parse_kripke a bit more complicated because it is now responsible for
generating the (index, state) mapping, and then evaluating the last component of
each parsed state to generate the appropriate argument to pass to mk_kripke.

> parse_kripke s = case parse (do{p <- k_state_list ; eof ; return p}) "" s of
> Left err -> Left err
> Right ns -> Right $ mk_kripke.map (\n -> resolve n) $ ns
> where resolve (x,ts) = (x,mapMaybe (\t -> t idxes) ts)
> idxes = gen_index 0 (map (fst.fst) ns)
> gen_index _ [] = []
> gen_index i (x:xs) = (i,x):(gen_index (i+1) xs)

From here on out, Parsec does the heavy lifting for us. We just define Parsers
for each of the non-terminals in the Kripke grammar and string them together as
needed.

As such, the commentary becomes rather sparse, bordering on non-existent. After
all, this isn't really meant to be a Parsec tutorial. If you're really curious
about what's going on, read this in parallel with the Parsec documentation;
otherwise I'd recommend just taking it on faith that this is a parser.

> k_atomic_prop :: Parser String
> k_state_name :: Parser String
> k_prop_list :: Parser [String]
> k_trans_list :: Parser [String]

> kripke_lexer = P.makeTokenParser (emptyDef {reservedNames = ["state", "props",
> "goes_to"],
> identStart = lower <|> char '_',
> identLetter = alphaNum <|> char '_'
> })

> whiteSpace = P.whiteSpace

> k_atomic_prop = P.identifier kripke_lexer
> k_state_name = P.stringLiteral kripke_lexer
> k_reserved = P.reserved kripke_lexer
> k_squares = P.squares kripke_lexer
> k_sep = P.commaSep kripke_lexer
> k_colon = P.colon kripke_lexer

> k_prop_list = do k_reserved "props" ;
> k_colon ;
> x <- k_squares (k_sep k_atomic_prop) ;
> return x

> k_trans_list = do k_reserved "goes_to";
> k_colon ;
> x <- k_squares (k_sep k_state_name) ;
> return x


> k_state = do k_reserved "state" ;
> n <- k_state_name ;
> k_colon ;
> ps <- k_prop_list ;
> trans <- k_trans_list ;
> return ((n, ps),
> if trans /= []
> then map state_lookup trans
> else [state_lookup n])

> k_state_list = do whiteSpace kripke_lexer ;
> many1 k_state

We're going to be reading Kripke structures from a file, so we might as well
encapsulate that in a helper function:

> load_kripke :: String -> IO (Either ParseError Kripke)
> load_kripke f = do s <- readFile f
> return $ parse_kripke s

That's it. We're done with the Kripke parser! Now on to CTL.

The grammar for CTL propositions looks like this:
prop ::= atomic
|= (prop)
|= ~ prop
|= prop /\ prop
|= prop \/ prop
|= AX prop
|= EX prop
|= AG prop
|= AF prop
|= EG prop
|= EF prop
|= A prop U prop
|= A prop R prop
|= E prop U prop
|= E prop R prop
atomic ::= lowercase [alphaNum_]+

> prop :: GenParser Char () Prop

> prop = buildExpressionParser table basic
>        <?> "proposition"


> ctl_lexer = P.makeTokenParser (emptyDef {identStart = lower <|> char '_',
> identLetter = alphaNum <|> char '_',
> reservedOpNames = ["AG", "EG",
> "AX", "EX",
> "AF", "EF",
> "A", "E",
> "U", "R",
> "\\/", "\\/", "~"] })

> ctl_reserved = P.reservedOp ctl_lexer
> ctl_parens = P.parens ctl_lexer
> ctl_atomic_prop = P.identifier ctl_lexer
>
>
> data PropCompletion = Until Prop | Release Prop
>
> make_forall p1 (Until p2) = forall_until p1 p2
> make_forall p1 (Release p2) = forall_release p1 p2
> make_exists p1 (Until p2) = exists_until p1 p2
> make_exists p1 (Release p2) = exists_release p1 p2
>
> partial :: Parser (PropCompletion -> Prop)
> partial = do { ctl_reserved "E" ; p <- prop ; return $ make_exists p}
> <|> do { ctl_reserved "A" ; p <- prop ; return $ make_forall p}
>
> completion :: Parser PropCompletion
> completion = do { ctl_reserved "U" ; p <- prop ; return $ Until p }
> <|> do { ctl_reserved "R" ; p <- prop ; return $ Release p }
>
> basic = ctl_parens prop
> <|> do {p <- ctl_atomic_prop ; return $ satisfies p}
> <|> do {p1 <- partial ; p2 <- completion ; return $ p1 p2 }
>
> table :: OperatorTable Char () Prop
> table = [[pfx "~" notp],
> [pfx "AG" forall_always, pfx "EG" exists_always,
> pfx "AX" forall_next, pfx "EX" exists_next,
> pfx "EF" exists_eventually, pfx "AF" forall_eventually],
> [op "/\\" andp AssocLeft, op "\\/" orp AssocLeft]]
> where
> pfx s f = Prefix (do {ctl_reserved s ; return f})
> op s f assoc = Infix (do {ctl_reserved s ; return f}) assoc
>
>

Ok, we made it through the parsers; all we have left is to wire stuff
together. Everything's pretty simple except that we have to deal with IO, the
potential for errors, and so forth. Nothing really exciting.

>
> check :: Kripke -> String -> IO ([State])
> check g s =
> case parse (do{ whiteSpace ctl_lexer ;
> p <- prop ;
> whiteSpace ctl_lexer ;
> eof ;
> return p}
> ) "" s of
> Left err -> do { putStr "parse error at " ; print err ; return []}
> Right x -> return $ sort.x.states $ g
>

> main =
> do prog <- getProgName ;
> args <- getArgs ;
> case args of
> [] -> putStr $ "Usage "++prog++" <kripke_file> [props...]\n"
> (kf:ps) -> do mk <- load_kripke kf ;
> case mk of
> Left e -> do { putStr $ "parse error at " ;
> print e ;
> return () }

> Right k -> mapM_ (\p ->
>              do{ ks <- check k p ;
>                  if elemOf ks (state 0 k)
>                  then putStr $ "Prop \""++p++"\" holds.\n"
>                  else putStr $ "Prop \""++p++"\" does not hold.\n"
>                }) ps
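
Assuming the checker is compiled to a binary called ctl-checker (the name is
arbitrary) and the sample system above is saved in example.kripke, a session
might look like:

./ctl-checker example.kripke "accepting_input /\ EF output_ready"
Prop "accepting_input /\ EF output_ready" holds.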

Metapost

I'm starting this blog to share the various pieces of code I find particularly interesting. Unlike this first post, all subsequent posts will be heavily commented (often "literate") code. My main areas of interest are security, operating systems, programming language theory/implementation, and formal methods; so most of the posts will be related to one of these themes. Most of the posts will be my original code, but some will be pearls taken from elsewhere (always credited) with added commentary.
