Date:      Tue, 23 Sep 1997 00:30:25 -0600
From:      "Justin T. Gibbs" <gibbs@plutotech.com>
To:        Terry Lambert <tlambert@primenet.com>
Cc:        gibbs@plutotech.com (Justin T. Gibbs), nate@mt.sri.com, bde@zeta.org.au, current@FreeBSD.ORG
Subject:   Re: cvs commit: src/sys/conf files src/sys/dev/vx if_vx.c if_vxreg.h src/sys/i386/apm apm.c src/sys/i386/conf GENERIC files.i386 
Message-ID:  <199709230630.AAA13505@pluto.plutotech.com>
In-Reply-To: Your message of "Tue, 23 Sep 1997 04:15:49 -0000." <199709230415.VAA07549@usr08.primenet.com> 

>> >So the question to ask is "how many softclocks does the average timer
>> >entry live across?".
>> 
>> Go read the code.
>
>This one was less rhetorical, and required an empirical judgement on
>your part to get the same conclusion I had already made.  It's called
>a Socratic Dialectic.

Bullshit.  You stated that every entry would be touched on each softclock
tick.  This simply isn't how it works, and that would have been obvious to
you if you had either read the paper or read the code.  Of course, now that
you've read this piece of mail that explains how it works, you pretend
that the misunderstanding was on my part and that I don't seem to be able
to understand your plain English.  I love it!

>The real answer is "it depends on what timer
>intervals you use and with what frequency a timer is dequeued before it
>fires" to answer the question.  It seems the answer is "a given entry
>is traversed by many softclock() events; how many depends on the
>expected duration of the entry divided by the number of hash buckets".

Which is pretty obvious.

>For 10 users, this is much less than 256.

Go read the code, and also really read my posts on this thread.  Did
you miss the message where I showed that the number of callouts allocated
for a 10-user system is 196?  Or are we supposed to believe that 196
is much less than 256?  Did you also miss the fact that this is a
power-of-2 hash table, so that even on a 10-user system the size is
rounded up to 256?

>You still haven't stated how
>many timers are expected to be enqueued in all buckets, on average, when
>a softclock hits, in common usage.

Sure I have.  On each softclock, the number of timers enqueued in
all buckets is the number of callouts outstanding, which I've said
is around 30 for a typical workstation.  Of course, you probably
meant to ask what the average length of a hash chain is, and to
answer that question for the loads I'm interested in, I'd have to
instrument the code.  I expect it to be h, where h << n and n is
the total number of callouts outstanding.

>> An entry will only have its count decremented if, during its lifetime,
>> softclock traverses the list of callouts in its hash bucket.  It may
>> take numerous calls to softclock before it touches the bucket where
>> a particular callout is placed.
>
>Nate's point was that this assumes one entry per hash bucket, on average.

No, it doesn't.  For example, here is a 4-bucket call wheel:

	1   2   3   4
	+   +   +   +
	x   a   b   d
	+       +   +
	y       c   f
	+       +
	z       e

Let's say that softclock is currently at bucket 2.  When will entries
x, y, and z next be referenced?  During the call to softclock 3 ticks
in the future.  If any of x, y, or z are removed before that time,
they will not be referenced.  The average hash chain length in this
call wheel is a little over 2.

>For the default of 10 users in GENERIC, I think this is a bad assumption.
>Even for 32 users (256 buckets), it's stretching things if the timers
>see any real use at all.

The hash table size is still larger than the total number of callouts
that can be outstanding.

>> >Queue the entry for ordered insertion when timeout() is called, but
>> >delay the ordered insertion until softclock().
>> 
>> Sounds like it will take additional space.
>
>No.  It will take two additional pointers and a trivial amount of
>code identical to the code you would need anyway.

Two pointers is additional space.  It also makes softclock become
q * O(h) + O(c), where q is the number of deferred insertion entries,
h is the hash chain length, and c is the number of empty buckets
that are traversed to find the next timeout for scheduling the next
one shot.  Then there is the additional code in softclock to ensure
that none of the timeouts queued while softclock is running are
supposed to expire in the current tick.  You would need to perform
that check every time softclock drops from splhigh to let other
interrupts run, since there is nothing to prevent a timeout handler
or interrupt handler from scheduling such an entry.  Sure, it's
doable, but the current implementation doesn't have any of this mess
and still gets the O(1) insertion, and softclock pays a single O(h).

>					Regards,
>					Terry Lambert
>					terry@lambert.org
>---
>Any opinions in this posting are my own and not those of my present
>or previous employers.
>

--
Justin T. Gibbs
===========================================
  FreeBSD: Turning PCs into workstations
===========================================


