Date:      Fri, 08 Sep 2023 11:15:39 +0000
From:      "Mathew, Cherry G.*" <c@bow.st>
To:        "Mathew, Cherry G.*" <c@bow.st>
Cc:        Alexander Leidinger <Alexander@Leidinger.net>,  adridg@freebsd.org,  freebsd-hackers@freebsd.org
Subject:   Re: ARC model specified in spinroot/promela
Message-ID:  <85o7icapb8.fsf@bow.st>
In-Reply-To: <85y1hjecna.fsf@bow.st> (Cherry G. Mathew's message of "Wed, 06 Sep 2023 11:58:49 +0000")
References:  <85jzt96qjz.fsf@bow.st> <9c424a574cdd39fc879c9ed9192556c0@Leidinger.net> <858r9o6ee0.fsf@bow.st> <c2197ad9446eca4edae2db09a859d66b@Leidinger.net> <85pm304dzi.fsf@bow.st> <85a5u22oxx.fsf@bow.st> <85y1hjecna.fsf@bow.st>

>>>>> Mathew, Cherry G * <c@bow.st> writes:

>>>>> Mathew, Cherry G * <c@bow.st> writes:
    > [...]


    >> The next step is to write the equivalent C code, compile it with
    >> a simple test "driver" (ideally ATF, but since this is standalone
    >> for the moment, I'll use something simpler, just to illustrate
    >> the concept), and then hook it into spin's "modex" tool, to
    >> extract the implicit model. Once this is done, we'll have a full
    >> cycle of design->implement->verify, and this will demonstrate the
    >> DDD (Design Driven Development) methodology I'd love to hear your
    >> thoughts about.

    > I'm attaching below patch (with the usual instructions) to give a
    > sense of direction. At this point, the C code is purely manually
    > crafted. The next step (modex extraction) ties the original model
    > (arc.pml and friends) to the extracted code (arc.c and friends). I
    > am working on that next, and will post later in the week when
    > that's done.

Follow-up as promised - please see the patch below, with the usual
instructions. There is a slight complication, though: the modex tool
isn't packaged for most OSs. It is, however, fairly straightforward to
compile in a posix-y build environment. You can grab the sources here:
https://github.com/nimble-code/Modex

I think at this stage a few comments on the model extraction are
important: 

1) The usual pattern with spin's modex is to convert the C source code
into "snippets" that sit within the extracted promela source. Subsets of
the code's data state space are then exported to the model, which can be
subjected to the usual LTL invariance checks.

I've taken a different approach here - slightly abusing the extractor
framework, I have essentially "converted" the C source code back into a
form as close to the original model as possible.
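
To make that concrete with one guard (taken from the Substitute rules in
arc.prx below): a stock modex extraction leaves the C embedded in the
model, along the lines of

  c_expr { ((lengthof(T1)!=0)&&((lengthof(T1)>now.p)||(memberof(B2,x_t)&&(lengthof(T1)==now.p)))) }

whereas the substituted extraction recovers the model-level expression

  (lengthof(T1) != 0) && ((lengthof(T1) > p) || (memberof(B2, x_t) && (lengthof(T1) == p)))

which diffs cleanly against the handwritten arc.pml.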

In order to see this in action, do the following:

$ make clean modex-gen
$ cp spinmodel.pml /tmp/spinmodel-modex.pml
$ make clean spin-gen
$ diff -u spinmodel.pml /tmp/spinmodel-modex.pml

When you make the comparison, you will see that the extraction attempts
retrieval at the source-model level. I ran this by the author of spin
(Dr. Gerard Holzmann) and he said the approach looked interesting. The
problem is that the "harness language" used to drive the extraction is
very brittle and has terrible syntax IMHO (whoever thought up <TAB> as
the token separator?!). See arc.prx for this in action.
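
For a taste of that syntax, here is a single Substitute rule from
arc.prx; its three fields (keyword, pattern, replacement) must be
separated by literal tabs:

  Substitute	c_code { now.p=min((now.p+now.d1),C); }	p = min((p + d1), C)

An editor that silently expands tabs to spaces is enough to break such a
rule.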

The advantage of this approach is that, if the model passes
verification, you can be all but certain that any logical bugs lie
elsewhere.

Interestingly, I now have a good challenge, as the queue.h manipulation
I have done via arc.h does crash (at delLRU()). So this will be a good
test of my theory. At the very least, the model extraction process is a
far more rigorous form of code review - one where questions about the
logic can be asked formally via the LTL constraints.
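
For example, rather than eyeballing the list-length bookkeeping during
review, arc.inv (below) states it as an invariant the verifier must
exhaust:

  always ((lengthof(T1) + lengthof(B1)) <= C)

so a bookkeeping bug surfaces as a counterexample trail instead of a
reviewer's hunch.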

2) The modex data object export support is terrible - it supports only
basic data types, and the source parser is unable to automatically
export expressions. As you probably noticed in the arc.prx file, I've
kludged around this by doing, in effect, a sed s// over all such
expressions in the source code. This shortcoming can be addressed by
improving the modex parser code. I have applied for a research grant in
this regard, and if it comes through, I am committed to fixing it (in
consultation with the primary author, of course).

3) The gap between the extracted code and the "glue" that allows it to
be driven both as a final compiled program and as a module under test
can be addressed by setting up infrastructure that I'm calling
"Equivalence Verification/Testing". For example, the semantics of
readLRU()/delLRU() are implemented using spin "channel"s in the model,
but with a simple queue(3) mechanism in the C implementation.
Equivalence testing would guarantee that both implement the intended
semantics accurately.
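
As a first cut, I imagine such equivalence tests living at the level of
the primitive operations. Below is a minimal sketch of what I mean - a
hypothetical eqtest.c, compiled against the arc.h/queue.h from the patch
below - which checks that the queue(3)-based C primitives honour the
FIFO semantics the model gets for free from its channels:

#include <assert.h>
#include "arc.h"	/* from the patch below */

int
main(void)
{
	static struct arc_list _L, *L = &_L;
	struct arc_item a, b, out;
	int i;

	/* Mirror arc_list_init() from arc.c */
	TAILQ_INIT(&L->qhead);
	L->qcount = 0;
	for (i = 0; i < ARCLEN; i++)
		init_arc_item(&L->item_list[i], IID_INVAL, false);

	init_arc_item(&a, 1, false);
	init_arc_item(&b, 2, false);

	addMRU(L, &a);		/* analogue of channel send: copy onto the tail */
	addMRU(L, &b);
	assert(lengthof(L) == 2);

	readLRU(L, &out);	/* analogue of channel peek: copy the head, leave it */
	assert(out.iid == 1);	/* FIFO: the oldest item sits at the LRU end */

	delLRU(L);		/* analogue of channel receive: drop the head */
	readLRU(L, &out);
	assert(out.iid == 2);

	return 0;
}

Replaying the same trace against the promela primitives in arc.pmh
(where addMRU/readLRU/delLRU are channel operations) should produce the
same observations; when both sides agree over such traces, the two
implementations of the interface are equivalent in the sense I mean.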

I believe the *BSD OSs would have a firm USP if we could implement
modular verification as part of the CI infrastructure - having a mirror
"Equivalence" infrastructure would make this process much easier for new
modules, as well as for ingesting existing ones.

With that thought, I take my leave for the weekend, and hope to put my
money where my mouth is by debugging the current C implementation using
the ideas described above.

$ make prog
$ ./arc

should give you a crash dump to feed to gdb; I'm getting a SEG fault at
delLRU(), well into the caching input trace (T1 has 59 out of 64 entries
filled!).
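
The usual drill applies, e.g.:

$ gdb ./arc
(gdb) run
(gdb) bt

which should show the fault inside the delLRU() expansion, as noted
above.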

Finally, if you know anyone contracting or hiring software engineers or
leads, where an intersection of concurrent programming and modelling is
interesting, please signpost me (off-list, of course).

Have a great weekend!

-- 
~cherry

diff -urN arc.null/arc.c arc/arc.c
--- arc.null/arc.c	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.c	2023-09-07 07:56:40.245824950 +0000
@@ -0,0 +1,174 @@
+/* C Implementation of the Adaptive Replacement Cache algorithm. Written by cherry */
+
+/*
+ * We implement the following algorithm from page 10, Figure 4.
+ * https://www.usenix.org/legacy/events/fast03/tech/full_papers/megiddo/megiddo.pdf
+ *
+ *
+ *  ARC(c)
+ *  
+ *  INPUT: The request stream x1,x2,....,xt,....
+ *  INITIALIZATION: Set p = 0 and set the LRU lists T1, B1, T2, and B2 to empty.
+ *  
+ *  For every t>=1 and any xt, one and only one of the following four cases must occur.
+ *  Case I: xt is in T1 or T2. A cache hit has occurred in ARC(c) and DBL(2c).
+ *       Move xt to MRU position in T2.
+ *  
+ *  Case II: xt is in B1. A cache miss (resp. hit) has occurred in ARC(c) (resp. DBL(2c)).
+ *       	 ADAPTATION: Update p = min { p + d1,c }
+ *               where d1 = { 1 if |B1| >= |B2|, |B2|/|B1| otherwise }
+ *  
+ *       REPLACE(xt, p). Move xt from B1 to the MRU position in T2 (also fetch xt to the cache).
+ *  
+ *  Case III: xt is in B2. A cache miss (resp. hit) has occurred in ARC(c) (resp. DBL(2c)).
+ *       	 ADAPTATION: Update p = max { p - d2,0 }
+ *               where d2 = { 1 if |B2| >= |B1|, |B1|/|B2| otherwise }
+ *  
+ *       REPLACE(xt, p). Move xt from B2 to the MRU position in T2 (also fetch xt to the cache).
+ *       
+ *  Case IV: xt is not in T1 U B1 U T2 U B2. A cache miss has occurred in ARC(c) and DBL(2c).
+ *       Case A: L1 = T1 U B1 has exactly c pages.
+ *       	  If (|T1| < c)
+ *  	     	     	Delete LRU page in B1. REPLACE(xt,p).
+ *  	  	  else
+ *			Here B1 is empty. Delete LRU page in T1 (also remove it from the cache).
+ *  	  	  endif
+ *       Case B: L1 = T1 U B1 has less than c pages.
+ *       	  If (|T1| + |T2| + |B1| + |B2| >= c)
+ *  	             Delete LRU page in B2, if (|T1| + |T2| + |B1| + |B2| = 2c).
+ *  		     REPLACE(xt, p).
+ *  	  	  endif
+ *  
+ *       Finally, fetch xt to the cache and move it to MRU position in T1.
+ *  
+ *  Subroutine REPLACE(xt,p)
+ *       If ( (|T1| is not empty) and ((|T1| exceeds the target p) or (xt is in B2 and |T1| = p)) )
+ *       	  Delete the LRU page in T1 (also remove it from the cache), and move it to MRU position in B1.
+ *       else
+ *		  Delete the LRU page in T2 (also remove it from the cache), and move it to MRU position in B2.
+ *       endif
+ */
+ 
+#include "arc.h"
+
+static void arc_list_init(struct arc_list *_arc_list)
+{
+	TAILQ_INIT(&_arc_list->qhead);
+	_arc_list->qcount = 0;
+	
+	int i;
+	for(i = 0; i < ARCLEN; i++) {
+		init_arc_item(&_arc_list->item_list[i], IID_INVAL, false);
+	};
+}
+
+int p, d1, d2;
+struct arc_list _B1, *B1 = &_B1, _B2, *B2 = &_B2, _T1, *T1 = &_T1, _T2, *T2 = &_T2;
+
+void arc_init(void)
+{
+	p = d1 = d2 = 0;
+
+	arc_list_init(B1);
+	arc_list_init(B2);
+	arc_list_init(T1);
+	arc_list_init(T2);
+}
+
+struct arc_item _LRUitem, *LRUitem = &_LRUitem;
+
+static void
+REPLACE(struct arc_item *x_t, int p)
+{
+
+
+	init_arc_item(LRUitem, IID_INVAL, false);
+
+	if ((lengthof(T1) != 0) &&
+	    ((lengthof(T1) >  p) ||
+	     (memberof(B2, x_t) && (lengthof(T1) == p)))) {
+		/* XXX: d_step { ? */
+		readLRU(T1, LRUitem);
+		delLRU(T1);
+		cacheremove(LRUitem);
+		addMRU(B1, LRUitem);
+	} else {
+		readLRU(T2, LRUitem);
+		delLRU(T2);
+		cacheremove(LRUitem);
+		addMRU(B2, LRUitem);
+	}
+}
+
+void
+ARC(struct arc_item *x_t)
+{
+	if (memberof(T1, x_t)) { /* Case I */
+		delitem(T1, x_t);
+		addMRU(T2, x_t);
+	}
+
+	if (memberof(T2, x_t)) { /* Case I */
+		delitem(T2, x_t);
+		addMRU(T2, x_t);
+	}
+
+	if (memberof(B1, x_t)) { /* Case II */
+		d1 = ((lengthof(B1) >= lengthof(B2)) ? 1 : (lengthof(B2)/lengthof(B1)));
+		p = min((p + d1), C);
+
+		REPLACE(x_t, p);
+
+		delitem(B1, x_t);
+		addMRU(T2, x_t);
+		cachefetch(x_t);
+	}
+
+	if (memberof(B2, x_t)) { /* Case III */
+		d2 = ((lengthof(B2) >= lengthof(B1)) ? 1 : (lengthof(B1)/lengthof(B2)));
+		p = max((p - d2), 0);
+
+		REPLACE(x_t, p);
+
+		delitem(B2, x_t);
+		addMRU(T2, x_t);
+		cachefetch(x_t);
+	}
+
+	if (!(memberof(T1, x_t) ||
+	      memberof(B1, x_t) ||
+	      memberof(T2, x_t) ||
+	      memberof(B2, x_t))) { /* Case IV */
+
+		if ((lengthof(T1) + lengthof(B1)) == C) { /* Case A */
+			if (lengthof(T1) < C) {
+				delLRU(B1);
+				REPLACE(x_t, p);
+			} else {
+				assert(lengthof(B1) == 0);
+				readLRU(T1, LRUitem);
+				delLRU(T1);
+				cacheremove(LRUitem);
+			}
+		}
+
+		if ((lengthof(T1) + lengthof(B1)) < C) {
+			if ((lengthof(T1) +
+			     lengthof(T2) +
+			     lengthof(B1) +
+			     lengthof(B2)) >= C) {
+				if ((lengthof(T1) +
+				     lengthof(T2) +
+				     lengthof(B1) +
+				     lengthof(B2)) == (2 * C)) {
+
+					delLRU(B2);
+				}
+				
+				REPLACE(x_t, p);
+			}
+		}
+		cachefetch(x_t);
+		addMRU(T1, x_t);
+	}
+}
diff -urN arc.null/arc.drv arc/arc.drv
--- arc.null/arc.drv	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.drv	2023-09-07 16:43:18.906339006 +0000
@@ -0,0 +1,52 @@
+/* Spin process model for Adaptive Replacement Cache algorithm. Written by cherry */
+
+/* Note: What we're attempting in this driver file is to generate an
+ * input trace that would exercise all code-paths of the model specified
+ * in arc.pml
+ *
+ * Feeding a static trace to the algorithm in array _x[N_ITEMS] is a
+ * acceptable alternative.
+ */
+/* #include "arc.pmh" See Makefile , spinmodel.pml */
+
+#define N_ITEMS (2 * C) /* Number of distinct cache items to test with */
+#define ITEM_REPS (C / 4) /* Max repeat item requests */
+#define N_ITERATIONS 3
+
+hidden arc_item _x[N_ITEMS]; /* Input state is irrelevant from a verification PoV */
+hidden int _x_iid = 0;
+hidden int _item_rep = 0;
+hidden int _iterations = 0;
+
+/* Drive the procs */
+init {
+
+	atomic {
+	       do
+	       ::
+	       _iterations < N_ITERATIONS ->
+	       
+			_x_iid = 0;
+			do
+			:: _x_iid < N_ITEMS ->
+			   	   init_arc_item(_x[_x_iid], _x_iid, false);
+				   _item_rep = 0;
+				   do
+				   :: _item_rep < (_x_iid % ITEM_REPS) ->
+				      		ARC(_x[_x_iid]);
+						_item_rep++;
+				   :: _item_rep >= (_x_iid % ITEM_REPS) ->
+				      		break;
+				   od
+				   _x_iid++;
+			:: _x_iid >= N_ITEMS ->
+				break;
+			od
+			_iterations++;
+		::
+		_iterations >= N_ITERATIONS ->
+			    break;
+		od
+	}
+
+}
diff -urN arc.null/arc_drv.c arc/arc_drv.c
--- arc.null/arc_drv.c	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_drv.c	2023-09-06 14:25:07.045001611 +0000
@@ -0,0 +1,33 @@
+#include "arc.h"
+
+#define N_ITEMS (2 * C) /* Number of distinct cache items to test with */
+#define ITEM_REPS (C / 4) /* Max repeat item requests */
+#define N_ITERATIONS 3
+
+static struct arc_item _x[N_ITEMS]; /* Input state is irrelevant from a verification PoV */
+static int _x_iid = 0;
+static int _item_rep = 0;
+static int _iterations = 0;
+
+/* Drive ARC() with a preset input trace */
+
+int
+main(void)
+{
+	arc_init(); /* Init module state */
+
+	while (_iterations < N_ITERATIONS) {
+		_x_iid = 0;
+		while (_x_iid < N_ITEMS) {
+			init_arc_item(&_x[_x_iid], _x_iid, false);
+			_item_rep = 0;
+			while(_item_rep < (_x_iid % ITEM_REPS) ) {
+				ARC(&_x[_x_iid]);
+				_item_rep++;
+			} 
+			_x_iid++;
+		}
+		_iterations++;
+	}
+}
+
diff -urN arc.null/arc.h arc/arc.h
--- arc.null/arc.h	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.h	2023-09-08 10:09:17.217694851 +0000
@@ -0,0 +1,169 @@
+/*
+ * The objective of the header here is to provide a set of macros that
+ * reflect the interfaces designed in arc.pmh
+ */
+
+#ifndef _ARC_H_
+#define _ARC_H_
+
+#ifdef __NetBSD__
+/* TODO: */
+#else
+/* Defaults to POSIX */
+#include <assert.h>
+#include <stddef.h>
+#include <stdbool.h>
+#endif
+
+#include "queue.h" /* We use the NetBSD version as it has no
+		    * dependencies (except for -DNULL) . */
+
+#define C 64
+
+#define ARCLEN (2 * C) /* c.f ghost cache directory length constraints in arc.inv */
+
+#define IID_INVAL -1
+
+struct arc_item {
+	TAILQ_ENTRY(arc_item) qlink;	
+	int iid;	/* Unique identifier for item */
+	bool cached;
+};
+
+struct arc_list {
+	TAILQ_HEAD(arc_qhead, arc_item) qhead;
+	int qcount;
+	struct arc_item item_list[ARCLEN]; /* We use static memory for demo purposes */
+};
+
+inline static struct arc_item * allocmember(struct arc_list *);
+inline static void freemember(struct arc_item *);
+inline static struct arc_item * findmember(struct arc_list *, struct arc_item *);
+
+#define init_arc_item(/* &struct arc_item [] */ _arc_item_addr,			\
+		      /* int */_iid, /*bool*/_cached)	do {			\
+		struct arc_item *_arc_item = _arc_item_addr;			\
+		assert(_arc_item != NULL);				\
+		_arc_item->iid = _iid;						\
+		_arc_item->cached = _cached;					\
+	} while (/*CONSTCOND*/0)
+
+#define lengthof(/* struct arc_list* */_arc_list) (_arc_list->qcount)
+#define memberof(/* struct arc_list* */_arc_list,				\
+		 /* struct arc_item* */_arc_item)				\
+	((findmember(_arc_list,							\
+		     _arc_item) != TAILQ_END(&_arc_list->qhead)) ?		\
+	 true : false)
+
+/*
+ * We follow spin's channel rx/tx semantics here: "send" means
+ * duplicate onto queue ("_arc_list.item_list!_arc_item.iid"), and
+ * receive means "duplicate" from queue but leave the data source on
+ * queue  ("_arc_list.item_list?<_arc_item.iid>").
+ *
+ * It is an error to addMRU() on a full queue. Likewise, it is an
+ * error to readLRU() on an empty queue. The verifier is expected to
+ * have covered any case where these happen. We use assert()s to
+ * indicate the error.
+ *
+ * Note: We use spin's channel mechanism in our design, only because
+ * it's the easiest. We could have chosen another
+ * mechanism/implementation, if the semantics were specified
+ * differently due to, for eg: convention, architectural or efficiency
+ * reasons.
+ */
+#define addMRU(/* struct arc_list* */_arc_list,					\
+	       /* struct arc_item* */_arc_item) do {				\
+		_arc_list->qcount++; assert(_arc_list->qcount < ARCLEN);	\
+		struct arc_item *aitmp = allocmember(_arc_list);		\
+		assert(aitmp != NULL);						\
+		*aitmp = *_arc_item;						\
+		TAILQ_INSERT_TAIL(&_arc_list->qhead, aitmp, qlink);	\
+	} while (/*CONSTCOND*/0)
+
+#define readLRU(/* struct arc_list* */_arc_list,				\
+		/* struct arc_item* */_arc_item) do {				\
+		assert(!TAILQ_EMPTY(&_arc_list->qhead));			\
+		assert(_arc_item != NULL);					\
+		*_arc_item = *(struct arc_item *)TAILQ_FIRST(&_arc_list->qhead);\
+	} while (/*CONSTCOND*/0)
+		
+#define delLRU(/* struct arc_list* */_arc_list)					\
+	if (!TAILQ_EMPTY(&_arc_list->qhead)) {					\
+		struct arc_item *aitmp = TAILQ_FIRST(&_arc_list->qhead);	\
+		TAILQ_REMOVE(&_arc_list->qhead, aitmp, qlink);			\
+		freemember(aitmp);						\
+		_arc_list->qcount--; assert(_arc_list->qcount >= 0);		\
+	} else assert(false)
+
+#define delitem(/* struct arc_list* */_arc_list,				\
+		/* struct arc_item* */_arc_item) do {				\
+	struct arc_item *aitmp;							\
+	aitmp = findmember(_arc_list, _arc_item);				\
+	if (aitmp != TAILQ_END(&_arc_list->qhead)) {				\
+		TAILQ_REMOVE(&_arc_list->qhead, aitmp, qlink);		\
+		freemember(aitmp);						\
+		_arc_list->qcount--; assert(_arc_list->qcount >= 0);		\
+	}									\
+	} while (/*CONSTCOND*/0)
+
+#define cachefetch(/* struct arc_item* */_arc_item) do {			\
+		_arc_item->cached = true; /* XXX:TODO */			\
+	} while (/*CONSTCOND*/0)
+
+#define cacheremove(/* struct arc_item* */_arc_item)  do {			\
+		_arc_item->cached = false;	/* XXX:TODO */			\
+	} while (/*CONSTCOND*/0)
+
+#define min(a, b) ((a < b) ? a : b)
+#define max(a, b) ((a > b) ? a : b)
+	
+/* These routines deal with our home-rolled mem management for the
+ * ghost cache directory memory embedded within statically defined
+ * struct arc_list buffers.
+ * Note that any pointers emerging from these should be treated as
+ * "opaque"/cookies - ie; they should not be assumed by other routines
+ * to have any specific properties (such as being part of any specific
+ * array etc.) They are solely for the consumption of these
+ * routines. Their contents however may be freely copied/written to.
+ */
+inline static struct arc_item *
+allocmember(struct arc_list *_arc_list)
+{
+	/* Search for the first unallocated item in given list */
+	struct arc_item *aitmp = NULL;
+	int i;
+	for (i = 0; i < ARCLEN; i++) {
+		if (_arc_list->item_list[i].iid == IID_INVAL) {
+			assert(_arc_list->item_list[i].cached == false);
+			aitmp = &_arc_list->item_list[i];
+		}
+	}
+	return aitmp;
+			
+}
+	
+inline static void
+freemember(struct arc_item *_arc_item)
+{
+	init_arc_item(_arc_item, IID_INVAL, false);
+}	
+
+static inline struct arc_item *
+findmember(struct arc_list *_arc_list, struct arc_item *aikey)
+{
+	assert(_arc_list != NULL && aikey != NULL);
+	assert(aikey->iid != IID_INVAL);
+	struct arc_item *aitmp;
+	TAILQ_FOREACH(aitmp, &_arc_list->qhead, qlink) {
+			if (aitmp->iid == aikey->iid) {
+				return aitmp;
+			}
+	}
+	return aitmp; /* returns TAILQ_END() on non-membership */
+}
+
+void ARC(struct arc_item * /* x_t */);
+void arc_init(void);
+
+#endif /* _ARC_H_ */
diff -urN arc.null/arc.inv arc/arc.inv
--- arc.null/arc.inv	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.inv	2023-09-04 09:20:20.224107390 +0000
@@ -0,0 +1,59 @@
+/* $NetBSD$ */
+
+/* These are Linear Temporal Logic invariants (and constraints)
+ * applied over the statespace created by the promela
+ * specification. Correctness is implied by Logical consistency.
+ */
+ltl 
+{
+	/* Liveness - all threads, except control must finally exit */
+	eventually always (_nr_pr == 1) &&
+	/* c.f Section I. B, on page 3 of paper */
+	always ((lengthof(T1) +
+	         lengthof(B1) +
+	         lengthof(T2) +
+	         lengthof(B2)) <= (2 * C)) &&
+
+	/* Reading together Section III. A., on page 7, and
+	 * Section III. B., on pages  7,8
+	 */
+	always ((lengthof(T1) + lengthof(B1)) <= C) &&
+	always ((lengthof(T2) + lengthof(B2)) < (2 * C)) &&
+
+	/* Section III. B, Remark III.1	*/
+	always ((lengthof(T1) + lengthof(T2)) <= C) &&
+
+	/* TODO: III B, A.1 */
+
+	/* III B, A.2 */
+	always (((lengthof(T1) +
+	          lengthof(B1) +
+	          lengthof(T2) +
+	          lengthof(B2)) < C)
+		 implies ((lengthof(B1) == 0) &&
+			   lengthof(B2) == 0)) &&
+
+	/* III B, A.3 */
+	always (((lengthof(T1) +
+	          lengthof(B1) +
+	          lengthof(T2) +
+	          lengthof(B2)) >= C)
+		 implies ((lengthof(T1) +
+		 	   lengthof(T2) == C))) &&
+
+	/* TODO: III B, A.4 */
+
+	/* TODO: III B, A.5 */
+
+	/* IV A. */
+	always (p <= C) &&
+
+	/* Not strictly true, but these force us to generate a "good"
+	 * input trace via arc.drv
+	 */
+
+	eventually /* always ? */ ((lengthof(T1) == p) && lengthof(T2) == (C - p)) &&
+
+	eventually (p > 0)
+		
+}
diff -urN arc.null/arc.pmh arc/arc.pmh
--- arc.null/arc.pmh	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.pmh	2023-09-08 10:36:52.827695101 +0000
@@ -0,0 +1,38 @@
+/* Spin process model for Adaptive Replacement Cache algorithm. Written by cherry */
+
+#ifndef _ARC_INC
+#define _ARC_INC
+
+#define C 64 /* Cache size - use judiciously - adds to statespace */
+
+typedef arc_item {
+	int iid;    /* Unique identifier for item */
+	bool cached;
+};
+
+/* Note that we use the arc_item.iid as the member lookup handle to reduce state space */
+typedef arc_list {
+	chan item_list  = [ 2 * C ] of { int }; /* A list of page items */
+};
+
+
+#define init_arc_item(_arc_item, _iid, _cached)		\
+	{		       			\
+		_arc_item.iid = _iid;	       		\
+		_arc_item.cached = _cached;		\
+	}
+
+#define lengthof(_arc_list) len(_arc_list.item_list)
+#define memberof(_arc_list, _arc_item) _arc_list.item_list??[eval(_arc_item.iid)]
+#define addMRU(_arc_list, _arc_item) _arc_list.item_list!_arc_item.iid
+#define readLRU(_arc_list, _arc_item) _arc_list.item_list?<_arc_item.iid>
+#define delLRU(_arc_list) _arc_list.item_list?_
+#define delitem(_arc_list, _arc_item) _arc_list.item_list??eval(_arc_item.iid)
+
+#define cachefetch(_arc_item) _arc_item.cached = true
+#define cacheremove(_arc_item) _arc_item.cached = false
+
+#define min(a, b) ((a < b) -> a : b)
+#define max(a, b) ((a > b) -> a : b)
+	
+#endif /* _ARC_INC */
\ No newline at end of file
diff -urN arc.null/arc.pml arc/arc.pml
--- arc.null/arc.pml	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.pml	2023-09-08 10:34:47.759024169 +0000
@@ -0,0 +1,216 @@
+/* Spin process model for Adaptive Replacement Cache algorithm. Written by cherry */
+
+/*
+ * We implement the following algorithm from page 10, Figure 4.
+ * https://www.usenix.org/legacy/events/fast03/tech/full_papers/megiddo/megiddo.pdf
+ *
+ *
+ *  ARC(c)
+ *  
+ *  INPUT: The request stream x1,x2,....,xt,....
+ *  INITIALIZATION: Set p = 0 and set the LRU lists T1, B1, T2, and B2 to empty.
+ *  
+ *  For every t>=1 and any xt, one and only one of the following four cases must occur.
+ *  Case I: xt is in T1 or T2. A cache hit has occurred in ARC(c) and DBL(2c).
+ *       Move xt to MRU position in T2.
+ *  
+ *  Case II: xt is in B1. A cache miss (resp. hit) has occurred in ARC(c) (resp. DBL(2c)).
+ *       	 ADAPTATION: Update p = min { p + d1,c }
+ *               where d1 = { 1 if |B1| >= |B2|, |B2|/|B1| otherwise }
+ *  
+ *       REPLACE(xt, p). Move xt from B1 to the MRU position in T2 (also fetch xt to the cache).
+ *  
+ *  Case III: xt is in B2. A cache miss (resp. hit) has occurred in ARC(c) (resp. DBL(2c)).
+ *       	 ADAPTATION: Update p = max { p - d2,0 }
+ *               where d2 = { 1 if |B2| >= |B1|, |B1|/|B2| otherwise }
+ *  
+ *       REPLACE(xt, p). Move xt from B2 to the MRU position in T2 (also fetch xt to the cache).
+ *       
+ *  Case IV: xt is not in T1 U B1 U T2 U B2. A cache miss has occurred in ARC(c) and DBL(2c).
+ *       Case A: L1 = T1 U B1 has exactly c pages.
+ *       	  If (|T1| < c)
+ *  	     	     	Delete LRU page in B1. REPLACE(xt,p).
+ *  	  	  else
+ *			Here B1 is empty. Delete LRU page in T1 (also remove it from the cache).
+ *  	  	  endif
+ *       Case B: L1 = T1 U B1 has less than c pages.
+ *       	  If (|T1| + |T2| + |B1| + |B2| >= c)
+ *  	             Delete LRU page in B2, if (|T1| + |T2| + |B1| + |B2| = 2c).
+ *  		     REPLACE(xt, p).
+ *  	  	  endif
+ *  
+ *       Finally, fetch xt to the cache and move it to MRU position in T1.
+ *  
+ *  Subroutine REPLACE(xt,p)
+ *       If ( (|T1| is not empty) and ((|T1| exceeds the target p) or (xt is in B2 and |T1| = p)) )
+ *       	  Delete the LRU page in T1 (also remove it from the cache), and move it to MRU position in B1.
+ *       else
+ *		  Delete the LRU page in T2 (also remove it from the cache), and move it to MRU position in B2.
+ *       endif
+ */
+ 
+/* #include "arc.pmh" See Makefile */
+
+#define IID_INVAL -1 /* XXX: move to header ? */
+
+/* Temp variable to hold LRU item */
+hidden arc_item LRUitem;
+
+/* Adaptation "delta" variables */
+int d1, d2;
+int p = 0;
+
+/* Declare arc lists - "shadow/ghost cache directories" */
+arc_list B1, B2, T1, T2;
+
+inline REPLACE(/* arc_item */ x_t, /* int */ p)
+{
+	/*
+	 * Since LRUitem is declared in scope p_ARC, we expect it to be only accessible from there and REPLACE()
+	 * as REPLACE() is only expected to be called from p_ARC.
+	 * XXX: May need to revisit due to Modex related limitations.
+	 */
+	init_arc_item(LRUitem, IID_INVAL, false);
+	
+	if
+		::
+		(lengthof(T1) != 0) &&
+		((lengthof(T1) > p) || (memberof(B2, x_t) && (lengthof(T1) == p)))
+		->
+		{
+		       readLRU(T1, LRUitem);
+		       delLRU(T1);
+		       cacheremove(LRUitem);
+		       addMRU(B1, LRUitem);
+		}
+
+		::
+		else
+		->
+		{
+		       readLRU(T2, LRUitem);
+		       delLRU(T2);
+		       cacheremove(LRUitem);
+		       addMRU(B2, LRUitem);
+		}
+	fi
+}
+
+inline ARC(/* arc_item */ x_t)
+{
+	if
+		:: /* Case I */
+		memberof(T1, x_t)
+		->
+		{
+		       delitem(T1, x_t);
+		       addMRU(T2, x_t);
+		}
+		:: /* Case I */
+		memberof(T2, x_t)
+		->
+		{
+		       delitem(T2, x_t);
+		       addMRU(T2, x_t);
+		}
+		:: /* Case II */
+		memberof(B1, x_t)
+		->
+		d1 = ((lengthof(B1) >= lengthof(B2)) -> 1 : (lengthof(B2)/lengthof(B1)));
+		p = min((p + d1), C);
+
+		REPLACE(x_t, p);
+		{
+		       delitem(B1, x_t);
+		       addMRU(T2, x_t);
+		       cachefetch(x_t);
+		}
+		:: /* Case III */
+		memberof(B2, x_t)
+		->
+		d2 = ((lengthof(B2) >= lengthof(B1)) -> 1 : (lengthof(B1)/lengthof(B2)));
+		p = max(p - d2, 0);
+		
+		REPLACE(x_t, p);
+		{
+		       delitem(B2, x_t);
+		       addMRU(T2, x_t);
+		       cachefetch(x_t);
+		}
+		:: /* Case IV */
+		!(memberof(T1, x_t) ||
+		  memberof(B1, x_t) ||
+		  memberof(T2, x_t) ||
+		  memberof(B2, x_t))
+		->
+		if
+			:: /* Case A */
+			((lengthof(T1) + lengthof(B1)) == C)
+			->
+			if
+				::
+				(lengthof(T1) < C)
+				->
+				delLRU(B1);
+				REPLACE(x_t, p);
+				::
+				else
+				->
+				assert(lengthof(B1) == 0);
+				{
+				       readLRU(T1, LRUitem);
+				       delLRU(T1);
+				       cacheremove(LRUitem);
+				}
+			fi
+			:: /* Case B */
+			((lengthof(T1) + lengthof(B1)) < C)
+			->
+			if
+				::
+				((lengthof(T1) +
+				  lengthof(T2) +
+				  lengthof(B1) +
+				  lengthof(B2)) >= C)
+				->
+				if
+					::
+					((lengthof(T1) +
+				  	  lengthof(T2) +
+				  	  lengthof(B1) +
+				  	  lengthof(B2)) == (2 * C))
+					->
+					delLRU(B2);
+					::
+					else
+					->
+					skip;
+				fi
+				REPLACE(x_t, p);
+				::
+				else
+				->
+				skip;
+			fi
+			::
+			else
+			->
+			skip;
+		fi
+		cachefetch(x_t);
+		addMRU(T1, x_t);
+	fi
+
+}
+
+#if 0 /* Resolve this after modex extract foo */
+proctype p_arc(arc_item x_t)
+{
+	/* Serialise entry */	
+	mutex_enter(sc_lock);
+
+	ARC(x_t);
+
+	mutex_exit(sc_lock);
+}
+#endif
diff -urN arc.null/arc.prx arc/arc.prx
--- arc.null/arc.prx	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.prx	2023-09-07 17:35:35.607923403 +0000
@@ -0,0 +1,81 @@
+// Spin model extractor harness written by cherry
+//
+%F arc.c
+%X -n REPLACE
+%X -n ARC
+%H
+// Disable effects of all included files and try to implement a subset of the APIs they provide.
+#define _ARC_H_
+%%
+//%C  // c_code {}
+//%%
+//%D // c_cdecl {}
+//%%
+%L
+// We use spin primitives and data objects.
+// See %P Below
+NonState	hidden	_LRUitem
+NonState	hidden	LRUitem
+NonState	hidden	_B2
+NonState	hidden	B2
+NonState	hidden	_B1
+NonState	hidden	B1
+NonState	hidden	_T2
+NonState	hidden	T2
+NonState	hidden	_T1
+NonState	hidden	T1
+NonState	hidden	x_t
+
+
+
+assert(...		keep
+REPLACE(...		keep
+init_arc_item(...	keep
+lengthof(...		keep
+memberof(...		keep
+addMRU(...		keep
+readLRU(...		keep
+delLRU(...		keep
+delitem(...		keep
+cacheremove(...		keep
+cachefetch(...		keep
+
+
+Substitute		c_expr { ((lengthof(T1)!=0)&&((lengthof(T1)>now.p)||(memberof(B2,x_t)&&(lengthof(T1)==now.p)))) }	(lengthof(T1) != 0) && ((lengthof(T1) > p) || (memberof(B2, x_t) && (lengthof(T1) == p)))
+Substitute		c_code { now.d1=((lengthof(B1)>=lengthof(B2)) ? Int 1\n : (lengthof(B2)/lengthof(B1))); }	d1 = ((lengthof(B1) >= lengthof(B2)) -> 1 : (lengthof(B2)/lengthof(B1)))
+Substitute		c_code { now.p=min((now.p+now.d1),C); }	 p = min((p + d1), C)
+
+Substitute		c_code { now.d2=((lengthof(B2)>=lengthof(B1)) ? Int 1\n : (lengthof(B1)/lengthof(B2))); }	d2 = ((lengthof(B2) >= lengthof(B1)) -> 1 : (lengthof(B1)/lengthof(B2)));
+Substitute		c_code { now.p=max((now.p-now.d2),0); }		      	  p = max(p - d2, 0);
+Substitute		c_expr { (!(((memberof(T1,x_t)||memberof(B1,x_t))||memberof(T2,x_t))||memberof(B2,x_t))) }			!(memberof(T1, x_t) || memberof(B1, x_t) || memberof(T2, x_t) ||  memberof(B2, x_t))
+Substitute		c_expr { ((lengthof(T1)+lengthof(B1))==C) }	((lengthof(T1) + lengthof(B1)) == C)
+Substitute		c_expr { (lengthof(T1)<C) }	(lengthof(T1) < C)
+Substitute		c_expr { ((lengthof(T1)+lengthof(B1))<C) }	((lengthof(T1) + lengthof(B1)) < C)
+Substitute		c_expr { ((((lengthof(T1)+lengthof(T2))+lengthof(B1))+lengthof(B2))>=C) }	((lengthof(T1) + lengthof(T2) + lengthof(B1) + lengthof(B2)) >= C)
+Substitute		c_expr { ((((lengthof(T1)+lengthof(T2))+lengthof(B1))+lengthof(B2))==(2*C)) }	((lengthof(T1) + lengthof(T2) + lengthof(B1) + lengthof(B2)) == (2 * C))
+%%
+
+%P
+#define IID_INVAL -1 /* XXX: move to header ? */
+
+/* Temp variable to hold LRU item */
+hidden arc_item LRUitem;
+
+arc_list B1, B2, T1, T2;
+
+#define p_REPLACE(_arg1, _arg2) REPLACE(_arg1, _arg2) /* Demo arbitrary Cfunc->PMLproc transformation */
+inline p_REPLACE(/* arc_item */ x_t, /* int */ p)
+{
+
+#include "_modex_REPLACE.pml"
+
+}
+
+#define p_ARC(_arg1) ARC(_arg1)
+inline p_ARC(/* arc_item */ x_t)
+{
+
+#include "_modex_ARC.pml"
+
+}
+%%
\ No newline at end of file
diff -urN arc.null/Makefile arc/Makefile
--- arc.null/Makefile	1970-01-01 00:00:00.000000000 +0000
+++ arc/Makefile	2023-09-08 10:32:11.650504807 +0000
@@ -0,0 +1,93 @@
+# This set of spinroot related files were written by cherry
+# <c@bow.st> in the Gregorian Calendar year AD.2023, in the month
+# of February that year.
+#
+# We have two specification files and a properties file
+#
+# The properties file contains "constraint" sections
+# such as ltl or never claims (either or, not both).
+# The specification is divided into two files:
+# the file with suffix '.drv' is a "driver" which
+# instantiates processes that will ultimately "drive" the
+# models under test.
+# The file with the suffix '.pml' contains the process
+# model code, which is intended to be the formal specification
+# for the code we are interested in writing in C.
+#
+# We process these files in slightly different ways during
+# the dev cycle, but broadly speaking, the idea is to create
+# a file called 'spinmodel.pml' which contains the final
+# model file that is fed to spin.
+#
+# Note that we use the model extractor tool "modex" to
+# extract the 'specification' from C code written to implement
+# the model defined above. We use a 'harness' file (see the file
+# with suffix '.prx' below).
+#
+# Once the harness has been run, spinmodel.pml should be
+# synthesised and processed as usual.
+# 
+# The broad idea is that software dev starts by writing the spec
+# first, validating the model, and then implementing the model in
+# C, after which we come back to extract the model from the C file
+# and cross check our implementation using spin.
+#
+# If things go well, the constraints specified in the '.inv' file
+# should hold exactly for both the handwritten model, and the
+# extracted one.
+
+spin-gen: arc.pml arc.drv arc.inv
+	cp arc.pml model #mimic modex
+	cat arc.pmh model > spinmodel.pml;cat arc.drv >> spinmodel.pml;cat arc.inv >> spinmodel.pml;
+	spin -am spinmodel.pml
+
+spin-build: #Could be spin-gen or modex-gen
+	cc -DVECTORSZ=65536 -o pan pan.c
+
+all: spin-gen spin-build prog
+
+# Verification related targets.
+spin-run: spin-build
+	./pan -a #Generates spinmodel.pml.trail on error
+
+# You run the trace only if the spin run above failed and created a trail
+spin-trace: spinmodel.pml.trail
+	spin -t spinmodel.pml -p -g #  -p (statements) -g (globals) -l (locals) -s (send) -r (recv)
+	./pan -r spinmodel.pml.trail -g
+
+# Build the implementation
+prog: arc.c arc.h
+	cc -g -o arc arc_drv.c arc.c
+
+# Modex Extracts from C code to 'model' - see arc.prx
+# Unfortunately there doesn't seem to be a way to specify a filename to generate
+modex-gen: arc.prx arc.c
+	modex -v -w arc.prx
+	cat arc.pmh model > spinmodel.pml;cat arc.drv >> spinmodel.pml;cat arc.inv >> spinmodel.pml;
+	spin -a spinmodel.pml #Sanity check
+
+# Housekeeping
+modex-gen-clean:
+	rm -f spinmodel.pml # Our consolidated model file
+	rm -f _spin_nvr.tmp # Never claim file
+	rm -f model # modex generated intermediate "model" file
+	rm -f pan.* # Spin generated source files
+	rm -f _modex* # modex generated script files
+	rm -f  *.I *.M
+
+prog-clean:
+	rm -f arc
+spin-run-clean:
+	rm -f spinmodel.pml.trail
+
+spin-build-clean:
+	rm -f pan
+
+spin-gen-clean:
+	rm -f spinmodel.pml # Our consolidated model file
+	rm -f _spin_nvr.tmp # Never claim file
+	rm -f model # Intermediate "model" file
+	rm -f pan.* # Spin generated source files
+
+clean: modex-gen-clean spin-gen-clean spin-build-clean spin-run-clean prog-clean
+	rm -f *~
diff -urN arc.null/queue.h arc/queue.h
--- arc.null/queue.h	1970-01-01 00:00:00.000000000 +0000
+++ arc/queue.h	2021-01-16 23:51:51.000000000 +0000
@@ -0,0 +1,655 @@
+/*	$NetBSD: queue.h,v 1.76 2021/01/16 23:51:51 chs Exp $	*/
+
+/*
+ * Copyright (c) 1991, 1993
+ *	The Regents of the University of California.  All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of the University nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ *	@(#)queue.h	8.5 (Berkeley) 8/20/94
+ */
+
+#ifndef	_SYS_QUEUE_H_
+#define	_SYS_QUEUE_H_
+
+/*
+ * This file defines five types of data structures: singly-linked lists,
+ * lists, simple queues, tail queues, and circular queues.
+ *
+ * A singly-linked list is headed by a single forward pointer. The
+ * elements are singly linked for minimum space and pointer manipulation
+ * overhead at the expense of O(n) removal for arbitrary elements. New
+ * elements can be added to the list after an existing element or at the
+ * head of the list.  Elements being removed from the head of the list
+ * should use the explicit macro for this purpose for optimum
+ * efficiency. A singly-linked list may only be traversed in the forward
+ * direction.  Singly-linked lists are ideal for applications with large
+ * datasets and few or no removals or for implementing a LIFO queue.
+ *
+ * A list is headed by a single forward pointer (or an array of forward
+ * pointers for a hash table header). The elements are doubly linked
+ * so that an arbitrary element can be removed without a need to
+ * traverse the list. New elements can be added to the list before
+ * or after an existing element or at the head of the list. A list
+ * may only be traversed in the forward direction.
+ *
+ * A simple queue is headed by a pair of pointers, one the head of the
+ * list and the other to the tail of the list. The elements are singly
+ * linked to save space, so elements can only be removed from the
+ * head of the list. New elements can be added to the list after
+ * an existing element, at the head of the list, or at the end of the
+ * list. A simple queue may only be traversed in the forward direction.
+ *
+ * A tail queue is headed by a pair of pointers, one to the head of the
+ * list and the other to the tail of the list. The elements are doubly
+ * linked so that an arbitrary element can be removed without a need to
+ * traverse the list. New elements can be added to the list before or
+ * after an existing element, at the head of the list, or at the end of
+ * the list. A tail queue may be traversed in either direction.
+ *
+ * For details on the use of these macros, see the queue(3) manual page.
+ */
+
+/*
+ * Include the definition of NULL only on NetBSD because sys/null.h
+ * is not available elsewhere.  This conditional makes the header
+ * portable and it can simply be dropped verbatim into any system.
+ * The caveat is that on other systems some other header
+ * must provide NULL before the macros can be used.
+ */
+#ifdef __NetBSD__
+#include <sys/null.h>
+#endif
+
+#if defined(_KERNEL) && defined(DIAGNOSTIC)
+#define QUEUEDEBUG	1
+#endif
+
+#if defined(QUEUEDEBUG)
+# if defined(_KERNEL)
+#  define QUEUEDEBUG_ABORT(...) panic(__VA_ARGS__)
+# else
+#  include <err.h>
+#  define QUEUEDEBUG_ABORT(...) err(1, __VA_ARGS__)
+# endif
+#endif
+
+/*
+ * Singly-linked List definitions.
+ */
+#define	SLIST_HEAD(name, type)						\
+struct name {								\
+	struct type *slh_first;	/* first element */			\
+}
+
+#define	SLIST_HEAD_INITIALIZER(head)					\
+	{ NULL }
+
+#define	SLIST_ENTRY(type)						\
+struct {								\
+	struct type *sle_next;	/* next element */			\
+}
+
+/*
+ * Singly-linked List access methods.
+ */
+#define	SLIST_FIRST(head)	((head)->slh_first)
+#define	SLIST_END(head)		NULL
+#define	SLIST_EMPTY(head)	((head)->slh_first == NULL)
+#define	SLIST_NEXT(elm, field)	((elm)->field.sle_next)
+
+#define	SLIST_FOREACH(var, head, field)					\
+	for((var) = (head)->slh_first;					\
+	    (var) != SLIST_END(head);					\
+	    (var) = (var)->field.sle_next)
+
+#define	SLIST_FOREACH_SAFE(var, head, field, tvar)			\
+	for ((var) = SLIST_FIRST((head));				\
+	    (var) != SLIST_END(head) &&					\
+	    ((tvar) = SLIST_NEXT((var), field), 1);			\
+	    (var) = (tvar))
+
+/*
+ * Singly-linked List functions.
+ */
+#define	SLIST_INIT(head) do {						\
+	(head)->slh_first = SLIST_END(head);				\
+} while (/*CONSTCOND*/0)
+
+#define	SLIST_INSERT_AFTER(slistelm, elm, field) do {			\
+	(elm)->field.sle_next = (slistelm)->field.sle_next;		\
+	(slistelm)->field.sle_next = (elm);				\
+} while (/*CONSTCOND*/0)
+
+#define	SLIST_INSERT_HEAD(head, elm, field) do {			\
+	(elm)->field.sle_next = (head)->slh_first;			\
+	(head)->slh_first = (elm);					\
+} while (/*CONSTCOND*/0)
+
+#define	SLIST_REMOVE_AFTER(slistelm, field) do {			\
+	(slistelm)->field.sle_next =					\
+	    SLIST_NEXT(SLIST_NEXT((slistelm), field), field);		\
+} while (/*CONSTCOND*/0)
+
+#define	SLIST_REMOVE_HEAD(head, field) do {				\
+	(head)->slh_first = (head)->slh_first->field.sle_next;		\
+} while (/*CONSTCOND*/0)
+
+#define	SLIST_REMOVE(head, elm, type, field) do {			\
+	if ((head)->slh_first == (elm)) {				\
+		SLIST_REMOVE_HEAD((head), field);			\
+	}								\
+	else {								\
+		struct type *curelm = (head)->slh_first;		\
+		while(curelm->field.sle_next != (elm))			\
+			curelm = curelm->field.sle_next;		\
+		curelm->field.sle_next =				\
+		    curelm->field.sle_next->field.sle_next;		\
+	}								\
+} while (/*CONSTCOND*/0)
+
+
+/*
+ * List definitions.
+ */
+#define	LIST_HEAD(name, type)						\
+struct name {								\
+	struct type *lh_first;	/* first element */			\
+}
+
+#define	LIST_HEAD_INITIALIZER(head)					\
+	{ NULL }
+
+#define	LIST_ENTRY(type)						\
+struct {								\
+	struct type *le_next;	/* next element */			\
+	struct type **le_prev;	/* address of previous next element */	\
+}
+
+/*
+ * List access methods.
+ */
+#define	LIST_FIRST(head)		((head)->lh_first)
+#define	LIST_END(head)			NULL
+#define	LIST_EMPTY(head)		((head)->lh_first == LIST_END(head))
+#define	LIST_NEXT(elm, field)		((elm)->field.le_next)
+
+#define	LIST_FOREACH(var, head, field)					\
+	for ((var) = ((head)->lh_first);				\
+	    (var) != LIST_END(head);					\
+	    (var) = ((var)->field.le_next))
+
+#define	LIST_FOREACH_SAFE(var, head, field, tvar)			\
+	for ((var) = LIST_FIRST((head));				\
+	    (var) != LIST_END(head) &&					\
+	    ((tvar) = LIST_NEXT((var), field), 1);			\
+	    (var) = (tvar))
+
+#define	LIST_MOVE(head1, head2, field) do {				\
+	LIST_INIT((head2));						\
+	if (!LIST_EMPTY((head1))) {					\
+		(head2)->lh_first = (head1)->lh_first;			\
+		(head2)->lh_first->field.le_prev = &(head2)->lh_first;	\
+		LIST_INIT((head1));					\
+	}								\
+} while (/*CONSTCOND*/0)
+
+/*
+ * List functions.
+ */
+#if defined(QUEUEDEBUG)
+#define	QUEUEDEBUG_LIST_INSERT_HEAD(head, elm, field)			\
+	if ((head)->lh_first &&						\
+	    (head)->lh_first->field.le_prev != &(head)->lh_first)	\
+		QUEUEDEBUG_ABORT("LIST_INSERT_HEAD %p %s:%d", (head),	\
+		    __FILE__, __LINE__);
+#define	QUEUEDEBUG_LIST_OP(elm, field)					\
+	if ((elm)->field.le_next &&					\
+	    (elm)->field.le_next->field.le_prev !=			\
+	    &(elm)->field.le_next)					\
+		QUEUEDEBUG_ABORT("LIST_* forw %p %s:%d", (elm),		\
+		    __FILE__, __LINE__);				\
+	if (*(elm)->field.le_prev != (elm))				\
+		QUEUEDEBUG_ABORT("LIST_* back %p %s:%d", (elm),		\
+		    __FILE__, __LINE__);
+#define	QUEUEDEBUG_LIST_POSTREMOVE(elm, field)				\
+	(elm)->field.le_next = (void *)1L;				\
+	(elm)->field.le_prev = (void *)1L;
+#else
+#define	QUEUEDEBUG_LIST_INSERT_HEAD(head, elm, field)
+#define	QUEUEDEBUG_LIST_OP(elm, field)
+#define	QUEUEDEBUG_LIST_POSTREMOVE(elm, field)
+#endif
+
+#define	LIST_INIT(head) do {						\
+	(head)->lh_first = LIST_END(head);				\
+} while (/*CONSTCOND*/0)
+
+#define	LIST_INSERT_AFTER(listelm, elm, field) do {			\
+	QUEUEDEBUG_LIST_OP((listelm), field)				\
+	if (((elm)->field.le_next = (listelm)->field.le_next) != 	\
+	    LIST_END(head))						\
+		(listelm)->field.le_next->field.le_prev =		\
+		    &(elm)->field.le_next;				\
+	(listelm)->field.le_next = (elm);				\
+	(elm)->field.le_prev = &(listelm)->field.le_next;		\
+} while (/*CONSTCOND*/0)
+
+#define	LIST_INSERT_BEFORE(listelm, elm, field) do {			\
+	QUEUEDEBUG_LIST_OP((listelm), field)				\
+	(elm)->field.le_prev = (listelm)->field.le_prev;		\
+	(elm)->field.le_next = (listelm);				\
+	*(listelm)->field.le_prev = (elm);				\
+	(listelm)->field.le_prev = &(elm)->field.le_next;		\
+} while (/*CONSTCOND*/0)
+
+#define	LIST_INSERT_HEAD(head, elm, field) do {				\
+	QUEUEDEBUG_LIST_INSERT_HEAD((head), (elm), field)		\
+	if (((elm)->field.le_next = (head)->lh_first) != LIST_END(head))\
+		(head)->lh_first->field.le_prev = &(elm)->field.le_next;\
+	(head)->lh_first = (elm);					\
+	(elm)->field.le_prev = &(head)->lh_first;			\
+} while (/*CONSTCOND*/0)
+
+#define	LIST_REMOVE(elm, field) do {					\
+	QUEUEDEBUG_LIST_OP((elm), field)				\
+	if ((elm)->field.le_next != NULL)				\
+		(elm)->field.le_next->field.le_prev = 			\
+		    (elm)->field.le_prev;				\
+	*(elm)->field.le_prev = (elm)->field.le_next;			\
+	QUEUEDEBUG_LIST_POSTREMOVE((elm), field)			\
+} while (/*CONSTCOND*/0)
+
+#define LIST_REPLACE(elm, elm2, field) do {				\
+	if (((elm2)->field.le_next = (elm)->field.le_next) != NULL)	\
+		(elm2)->field.le_next->field.le_prev =			\
+		    &(elm2)->field.le_next;				\
+	(elm2)->field.le_prev = (elm)->field.le_prev;			\
+	*(elm2)->field.le_prev = (elm2);				\
+	QUEUEDEBUG_LIST_POSTREMOVE((elm), field)			\
+} while (/*CONSTCOND*/0)
+
+/*
+ * Simple queue definitions.
+ */
+#define	SIMPLEQ_HEAD(name, type)					\
+struct name {								\
+	struct type *sqh_first;	/* first element */			\
+	struct type **sqh_last;	/* addr of last next element */		\
+}
+
+#define	SIMPLEQ_HEAD_INITIALIZER(head)					\
+	{ NULL, &(head).sqh_first }
+
+#define	SIMPLEQ_ENTRY(type)						\
+struct {								\
+	struct type *sqe_next;	/* next element */			\
+}
+
+/*
+ * Simple queue access methods.
+ */
+#define	SIMPLEQ_FIRST(head)		((head)->sqh_first)
+#define	SIMPLEQ_END(head)		NULL
+#define	SIMPLEQ_EMPTY(head)		((head)->sqh_first == SIMPLEQ_END(head))
+#define	SIMPLEQ_NEXT(elm, field)	((elm)->field.sqe_next)
+
+#define	SIMPLEQ_FOREACH(var, head, field)				\
+	for ((var) = ((head)->sqh_first);				\
+	    (var) != SIMPLEQ_END(head);					\
+	    (var) = ((var)->field.sqe_next))
+
+#define	SIMPLEQ_FOREACH_SAFE(var, head, field, next)			\
+	for ((var) = ((head)->sqh_first);				\
+	    (var) != SIMPLEQ_END(head) &&				\
+	    ((next = ((var)->field.sqe_next)), 1);			\
+	    (var) = (next))
+
+/*
+ * Simple queue functions.
+ */
+#define	SIMPLEQ_INIT(head) do {						\
+	(head)->sqh_first = NULL;					\
+	(head)->sqh_last = &(head)->sqh_first;				\
+} while (/*CONSTCOND*/0)
+
+#define	SIMPLEQ_INSERT_HEAD(head, elm, field) do {			\
+	if (((elm)->field.sqe_next = (head)->sqh_first) == NULL)	\
+		(head)->sqh_last = &(elm)->field.sqe_next;		\
+	(head)->sqh_first = (elm);					\
+} while (/*CONSTCOND*/0)
+
+#define	SIMPLEQ_INSERT_TAIL(head, elm, field) do {			\
+	(elm)->field.sqe_next = NULL;					\
+	*(head)->sqh_last = (elm);					\
+	(head)->sqh_last = &(elm)->field.sqe_next;			\
+} while (/*CONSTCOND*/0)
+
+#define	SIMPLEQ_INSERT_AFTER(head, listelm, elm, field) do {		\
+	if (((elm)->field.sqe_next = (listelm)->field.sqe_next) == NULL)\
+		(head)->sqh_last = &(elm)->field.sqe_next;		\
+	(listelm)->field.sqe_next = (elm);				\
+} while (/*CONSTCOND*/0)
+
+#define	SIMPLEQ_REMOVE_HEAD(head, field) do {				\
+	if (((head)->sqh_first = (head)->sqh_first->field.sqe_next) == NULL) \
+		(head)->sqh_last = &(head)->sqh_first;			\
+} while (/*CONSTCOND*/0)
+
+#define SIMPLEQ_REMOVE_AFTER(head, elm, field) do {			\
+	if (((elm)->field.sqe_next = (elm)->field.sqe_next->field.sqe_next) \
+	    == NULL)							\
+		(head)->sqh_last = &(elm)->field.sqe_next;		\
+} while (/*CONSTCOND*/0)
+
+#define	SIMPLEQ_REMOVE(head, elm, type, field) do {			\
+	if ((head)->sqh_first == (elm)) {				\
+		SIMPLEQ_REMOVE_HEAD((head), field);			\
+	} else {							\
+		struct type *curelm = (head)->sqh_first;		\
+		while (curelm->field.sqe_next != (elm))			\
+			curelm = curelm->field.sqe_next;		\
+		if ((curelm->field.sqe_next =				\
+			curelm->field.sqe_next->field.sqe_next) == NULL) \
+			    (head)->sqh_last = &(curelm)->field.sqe_next; \
+	}								\
+} while (/*CONSTCOND*/0)
+
+#define	SIMPLEQ_CONCAT(head1, head2) do {				\
+	if (!SIMPLEQ_EMPTY((head2))) {					\
+		*(head1)->sqh_last = (head2)->sqh_first;		\
+		(head1)->sqh_last = (head2)->sqh_last;		\
+		SIMPLEQ_INIT((head2));					\
+	}								\
+} while (/*CONSTCOND*/0)
+
+#define	SIMPLEQ_LAST(head, type, field)					\
+	(SIMPLEQ_EMPTY((head)) ?						\
+		NULL :							\
+	        ((struct type *)(void *)				\
+		((char *)((head)->sqh_last) - offsetof(struct type, field))))
+
+/*
+ * Tail queue definitions.
+ */
+#define	_TAILQ_HEAD(name, type, qual)					\
+struct name {								\
+	qual type *tqh_first;		/* first element */		\
+	qual type *qual *tqh_last;	/* addr of last next element */	\
+}
+#define TAILQ_HEAD(name, type)	_TAILQ_HEAD(name, struct type,)
+
+#define	TAILQ_HEAD_INITIALIZER(head)					\
+	{ TAILQ_END(head), &(head).tqh_first }
+
+#define	_TAILQ_ENTRY(type, qual)					\
+struct {								\
+	qual type *tqe_next;		/* next element */		\
+	qual type *qual *tqe_prev;	/* address of previous next element */\
+}
+#define TAILQ_ENTRY(type)	_TAILQ_ENTRY(struct type,)
+
+/*
+ * Tail queue access methods.
+ */
+#define	TAILQ_FIRST(head)		((head)->tqh_first)
+#define	TAILQ_END(head)			(NULL)
+#define	TAILQ_NEXT(elm, field)		((elm)->field.tqe_next)
+#define	TAILQ_LAST(head, headname) \
+	(*(((struct headname *)(void *)((head)->tqh_last))->tqh_last))
+#define	TAILQ_PREV(elm, headname, field) \
+	(*(((struct headname *)(void *)((elm)->field.tqe_prev))->tqh_last))
+#define	TAILQ_EMPTY(head)		(TAILQ_FIRST(head) == TAILQ_END(head))
+
+
+#define	TAILQ_FOREACH(var, head, field)					\
+	for ((var) = ((head)->tqh_first);				\
+	    (var) != TAILQ_END(head);					\
+	    (var) = ((var)->field.tqe_next))
+
+#define	TAILQ_FOREACH_SAFE(var, head, field, next)			\
+	for ((var) = ((head)->tqh_first);				\
+	    (var) != TAILQ_END(head) &&					\
+	    ((next) = TAILQ_NEXT(var, field), 1); (var) = (next))
+
+#define	TAILQ_FOREACH_REVERSE(var, head, headname, field)		\
+	for ((var) = TAILQ_LAST((head), headname);			\
+	    (var) != TAILQ_END(head);					\
+	    (var) = TAILQ_PREV((var), headname, field))
+
+#define	TAILQ_FOREACH_REVERSE_SAFE(var, head, headname, field, prev)	\
+	for ((var) = TAILQ_LAST((head), headname);			\
+	    (var) != TAILQ_END(head) && 				\
+	    ((prev) = TAILQ_PREV((var), headname, field), 1); (var) = (prev))
+
+/*
+ * Tail queue functions.
+ */
+#if defined(QUEUEDEBUG)
+#define	QUEUEDEBUG_TAILQ_INSERT_HEAD(head, elm, field)			\
+	if ((head)->tqh_first &&					\
+	    (head)->tqh_first->field.tqe_prev != &(head)->tqh_first)	\
+		QUEUEDEBUG_ABORT("TAILQ_INSERT_HEAD %p %s:%d", (head),	\
+		    __FILE__, __LINE__);
+#define	QUEUEDEBUG_TAILQ_INSERT_TAIL(head, elm, field)			\
+	if (*(head)->tqh_last != NULL)					\
+		QUEUEDEBUG_ABORT("TAILQ_INSERT_TAIL %p %s:%d", (head),	\
+		    __FILE__, __LINE__);
+#define	QUEUEDEBUG_TAILQ_OP(elm, field)					\
+	if ((elm)->field.tqe_next &&					\
+	    (elm)->field.tqe_next->field.tqe_prev !=			\
+	    &(elm)->field.tqe_next)					\
+		QUEUEDEBUG_ABORT("TAILQ_* forw %p %s:%d", (elm),	\
+		    __FILE__, __LINE__);				\
+	if (*(elm)->field.tqe_prev != (elm))				\
+		QUEUEDEBUG_ABORT("TAILQ_* back %p %s:%d", (elm),	\
+		    __FILE__, __LINE__);
+#define	QUEUEDEBUG_TAILQ_PREREMOVE(head, elm, field)			\
+	if ((elm)->field.tqe_next == NULL &&				\
+	    (head)->tqh_last != &(elm)->field.tqe_next)			\
+		QUEUEDEBUG_ABORT("TAILQ_PREREMOVE head %p elm %p %s:%d",\
+		    (head), (elm), __FILE__, __LINE__);
+#define	QUEUEDEBUG_TAILQ_POSTREMOVE(elm, field)				\
+	(elm)->field.tqe_next = (void *)1L;				\
+	(elm)->field.tqe_prev = (void *)1L;
+#else
+#define	QUEUEDEBUG_TAILQ_INSERT_HEAD(head, elm, field)
+#define	QUEUEDEBUG_TAILQ_INSERT_TAIL(head, elm, field)
+#define	QUEUEDEBUG_TAILQ_OP(elm, field)
+#define	QUEUEDEBUG_TAILQ_PREREMOVE(head, elm, field)
+#define	QUEUEDEBUG_TAILQ_POSTREMOVE(elm, field)
+#endif
+
+#define	TAILQ_INIT(head) do {						\
+	(head)->tqh_first = TAILQ_END(head);				\
+	(head)->tqh_last = &(head)->tqh_first;				\
+} while (/*CONSTCOND*/0)
+
+#define	TAILQ_INSERT_HEAD(head, elm, field) do {			\
+	QUEUEDEBUG_TAILQ_INSERT_HEAD((head), (elm), field)		\
+	if (((elm)->field.tqe_next = (head)->tqh_first) != TAILQ_END(head))\
+		(head)->tqh_first->field.tqe_prev =			\
+		    &(elm)->field.tqe_next;				\
+	else								\
+		(head)->tqh_last = &(elm)->field.tqe_next;		\
+	(head)->tqh_first = (elm);					\
+	(elm)->field.tqe_prev = &(head)->tqh_first;			\
+} while (/*CONSTCOND*/0)
+
+#define	TAILQ_INSERT_TAIL(head, elm, field) do {			\
+	QUEUEDEBUG_TAILQ_INSERT_TAIL((head), (elm), field)		\
+	(elm)->field.tqe_next = TAILQ_END(head);			\
+	(elm)->field.tqe_prev = (head)->tqh_last;			\
+	*(head)->tqh_last = (elm);					\
+	(head)->tqh_last = &(elm)->field.tqe_next;			\
+} while (/*CONSTCOND*/0)
+
+#define	TAILQ_INSERT_AFTER(head, listelm, elm, field) do {		\
+	QUEUEDEBUG_TAILQ_OP((listelm), field)				\
+	if (((elm)->field.tqe_next = (listelm)->field.tqe_next) != 	\
+	    TAILQ_END(head))						\
+		(elm)->field.tqe_next->field.tqe_prev = 		\
+		    &(elm)->field.tqe_next;				\
+	else								\
+		(head)->tqh_last = &(elm)->field.tqe_next;		\
+	(listelm)->field.tqe_next = (elm);				\
+	(elm)->field.tqe_prev = &(listelm)->field.tqe_next;		\
+} while (/*CONSTCOND*/0)
+
+#define	TAILQ_INSERT_BEFORE(listelm, elm, field) do {			\
+	QUEUEDEBUG_TAILQ_OP((listelm), field)				\
+	(elm)->field.tqe_prev = (listelm)->field.tqe_prev;		\
+	(elm)->field.tqe_next = (listelm);				\
+	*(listelm)->field.tqe_prev = (elm);				\
+	(listelm)->field.tqe_prev = &(elm)->field.tqe_next;		\
+} while (/*CONSTCOND*/0)
+
+#define	TAILQ_REMOVE(head, elm, field) do {				\
+	QUEUEDEBUG_TAILQ_PREREMOVE((head), (elm), field)		\
+	QUEUEDEBUG_TAILQ_OP((elm), field)				\
+	if (((elm)->field.tqe_next) != TAILQ_END(head))			\
+		(elm)->field.tqe_next->field.tqe_prev = 		\
+		    (elm)->field.tqe_prev;				\
+	else								\
+		(head)->tqh_last = (elm)->field.tqe_prev;		\
+	*(elm)->field.tqe_prev = (elm)->field.tqe_next;			\
+	QUEUEDEBUG_TAILQ_POSTREMOVE((elm), field);			\
+} while (/*CONSTCOND*/0)
+
+#define TAILQ_REPLACE(head, elm, elm2, field) do {			\
+        if (((elm2)->field.tqe_next = (elm)->field.tqe_next) != 	\
+	    TAILQ_END(head))   						\
+                (elm2)->field.tqe_next->field.tqe_prev =		\
+                    &(elm2)->field.tqe_next;				\
+        else								\
+                (head)->tqh_last = &(elm2)->field.tqe_next;		\
+        (elm2)->field.tqe_prev = (elm)->field.tqe_prev;			\
+        *(elm2)->field.tqe_prev = (elm2);				\
+	QUEUEDEBUG_TAILQ_POSTREMOVE((elm), field);			\
+} while (/*CONSTCOND*/0)
+
+#define	TAILQ_CONCAT(head1, head2, field) do {				\
+	if (!TAILQ_EMPTY(head2)) {					\
+		*(head1)->tqh_last = (head2)->tqh_first;		\
+		(head2)->tqh_first->field.tqe_prev = (head1)->tqh_last;	\
+		(head1)->tqh_last = (head2)->tqh_last;			\
+		TAILQ_INIT((head2));					\
+	}								\
+} while (/*CONSTCOND*/0)
+
+/*
+ * Singly-linked Tail queue declarations.
+ */
+#define	STAILQ_HEAD(name, type)						\
+struct name {								\
+	struct type *stqh_first;	/* first element */		\
+	struct type **stqh_last;	/* addr of last next element */	\
+}
+
+#define	STAILQ_HEAD_INITIALIZER(head)					\
+	{ NULL, &(head).stqh_first }
+
+#define	STAILQ_ENTRY(type)						\
+struct {								\
+	struct type *stqe_next;	/* next element */			\
+}
+
+/*
+ * Singly-linked Tail queue access methods.
+ */
+#define	STAILQ_FIRST(head)	((head)->stqh_first)
+#define	STAILQ_END(head)	NULL
+#define	STAILQ_NEXT(elm, field)	((elm)->field.stqe_next)
+#define	STAILQ_EMPTY(head)	(STAILQ_FIRST(head) == STAILQ_END(head))
+
+/*
+ * Singly-linked Tail queue functions.
+ */
+#define	STAILQ_INIT(head) do {						\
+	(head)->stqh_first = NULL;					\
+	(head)->stqh_last = &(head)->stqh_first;				\
+} while (/*CONSTCOND*/0)
+
+#define	STAILQ_INSERT_HEAD(head, elm, field) do {			\
+	if (((elm)->field.stqe_next = (head)->stqh_first) == NULL)	\
+		(head)->stqh_last = &(elm)->field.stqe_next;		\
+	(head)->stqh_first = (elm);					\
+} while (/*CONSTCOND*/0)
+
+#define	STAILQ_INSERT_TAIL(head, elm, field) do {			\
+	(elm)->field.stqe_next = NULL;					\
+	*(head)->stqh_last = (elm);					\
+	(head)->stqh_last = &(elm)->field.stqe_next;			\
+} while (/*CONSTCOND*/0)
+
+#define	STAILQ_INSERT_AFTER(head, listelm, elm, field) do {		\
+	if (((elm)->field.stqe_next = (listelm)->field.stqe_next) == NULL)\
+		(head)->stqh_last = &(elm)->field.stqe_next;		\
+	(listelm)->field.stqe_next = (elm);				\
+} while (/*CONSTCOND*/0)
+
+#define	STAILQ_REMOVE_HEAD(head, field) do {				\
+	if (((head)->stqh_first = (head)->stqh_first->field.stqe_next) == NULL) \
+		(head)->stqh_last = &(head)->stqh_first;			\
+} while (/*CONSTCOND*/0)
+
+#define	STAILQ_REMOVE(head, elm, type, field) do {			\
+	if ((head)->stqh_first == (elm)) {				\
+		STAILQ_REMOVE_HEAD((head), field);			\
+	} else {							\
+		struct type *curelm = (head)->stqh_first;		\
+		while (curelm->field.stqe_next != (elm))			\
+			curelm = curelm->field.stqe_next;		\
+		if ((curelm->field.stqe_next =				\
+			curelm->field.stqe_next->field.stqe_next) == NULL) \
+			    (head)->stqh_last = &(curelm)->field.stqe_next; \
+	}								\
+} while (/*CONSTCOND*/0)
+
+#define	STAILQ_FOREACH(var, head, field)				\
+	for ((var) = ((head)->stqh_first);				\
+		(var);							\
+		(var) = ((var)->field.stqe_next))
+
+#define	STAILQ_FOREACH_SAFE(var, head, field, tvar)			\
+	for ((var) = STAILQ_FIRST((head));				\
+	    (var) && ((tvar) = STAILQ_NEXT((var), field), 1);		\
+	    (var) = (tvar))
+
+#define	STAILQ_CONCAT(head1, head2) do {				\
+	if (!STAILQ_EMPTY((head2))) {					\
+		*(head1)->stqh_last = (head2)->stqh_first;		\
+		(head1)->stqh_last = (head2)->stqh_last;		\
+		STAILQ_INIT((head2));					\
+	}								\
+} while (/*CONSTCOND*/0)
+
+#define	STAILQ_LAST(head, type, field)					\
+	(STAILQ_EMPTY((head)) ?						\
+		NULL :							\
+	        ((struct type *)(void *)				\
+		((char *)((head)->stqh_last) - offsetof(struct type, field))))
+
+#endif	/* !_SYS_QUEUE_H_ */


