Date:      Thu, 05 Dec 1996 17:21:31 -0700
From:      Steve Passe <smp@csn.net>
To:        Thomas Pfenning <thomaspf@microsoft.com>
Cc:        "'Chris Csanady'" <ccsanady@friley216.res.iastate.edu>, "'Peter Wemm'" <peter@spinner.dialix.com>, "'smp@freebsd.org'" <smp@freebsd.org>
Subject:   Re: make locking more generic? 
Message-ID:  <199612060021.RAA16023@clem.systemsix.com>
In-Reply-To: Your message of "Thu, 05 Dec 1996 10:55:33 PST." <c=US%a=_%p=msft%l=RED-81-MSG-961205185533Z-4388@INET-02-IMC.microsoft.com> 

Hi,

> Doesn't the lock version in this mail actually trash the cached value for
> bootlock on every spin? What about using MCS queueing locks to solve
> both the cache trashing and the reentrance at the same time?
 ...
> >>ENTRY(boot_lock)
> >>	/* This is the Intel recommended semaphore method */
> >>	movb	$0xff, %al
> >>2:
> >>	xchgb	%al, bootlock		/* xchg is implicitly locked */
> >>	cmpb	$0xff, %al
> >>	jz	2b
> >>	ret
> >>
> >>ENTRY(boot_unlock)
> >>	movb	$0, %al
> >>	xchgb	%al, bootlock		/* xchg is implicitly locked */
> >>	ret
> >>
> >>	/* initial value is 0xff, or "busy" */
> >>	.globl	bootlock
> >>bootlock:	.byte 0xff

no,

 %al has 0xff in it,
 the initial locked value is 0xff,
 each spin puts %al (i.e. 0xff) into bootlock and bootlock into %al,
 so bootlock remains 0xff until the unlock function is called.

 the unlock function puts 0x00 into %al, then does the xchgb, putting
 the 0x00 into bootlock, and tossing the contents of %al as it returns.

 so the next spin on the lock by ONE AP puts %al (0xff every time) into
 bootlock, and gets %al filled with the 0x00 placed there by unlock.

 the other APs get the 0xff placed there by the one successful xchgb
 that the 1st AP made.
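
 the same exchange-and-test idea, written as a rough C sketch (the
 function names and the gcc-style __atomic builtins are just for
 illustration, not the actual kernel code):

	/* 0xff == "busy"; starts out held, just like the assembly above */
	static volatile unsigned char bootlock = 0xff;

	static void
	boot_lock(void)
	{
		/* keep swapping 0xff in until the old value comes back 0x00,
		 * i.e. until some other CPU has stored "free" in boot_unlock
		 */
		while (__atomic_exchange_n(&bootlock, 0xff, __ATOMIC_ACQUIRE) == 0xff)
			;	/* spin: while the lock is held we only ever see 0xff */
	}

	static void
	boot_unlock(void)
	{
		/* swap 0x00 ("free") in and toss the old value, just as the
		 * assembly discards %al on the way out
		 */
		(void)__atomic_exchange_n(&bootlock, 0x00, __ATOMIC_RELEASE);
	}

 the locked xchg in the assembly already gives the needed ordering; the
 memory-order arguments are just the C spelling of the same thing.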

--
Steve Passe	| powered by
smp@csn.net	|            FreeBSD



