Date:      Tue, 23 Sep 1997 10:53:11 -0600
From:      "Justin T. Gibbs" <gibbs@plutotech.com>
To:        Terry Lambert <tlambert@primenet.com>
Cc:        gibbs@plutotech.com (Justin T. Gibbs), nate@mt.sri.com, bde@zeta.org.au, current@FreeBSD.ORG
Subject:   Re: cvs commit: src/sys/conf files src/sys/dev/vx if_vx.c if_vxreg.h src/sys/i386/apm apm.c src/sys/i386/conf GENERIC files.i386 
Message-ID:  <199709231653.KAA24904@pluto.plutotech.com>
In-Reply-To: Your message of "Tue, 23 Sep 1997 16:13:06 -0000." <199709231613.JAA09972@usr01.primenet.com> 

>> Bullshit.  You stated that every entry would be touched on each softtick.
>
>No I did not.  Look at my sentence you quote in your response:
>
>> >> >So the question to ask is "how many softclocks does the average timer
>> >> >entry live across?".

Actually, let's look at the full context:

>>For timeouts that live across softclock calls, as Nate is suggesting,
>>the list is traversed and the entry counts against the implementation
>>based on how many softclocks it lives across, on average.  For ordered
>>insertion, the entry is counted against, once, at insertion time.
>>   
>>So the question to ask is "how many softclocks does the average timer
>>entry live across?".

>This has nothing to do with claiming that every entry would be touched
>on every softclock.  It has to do with an entry living, on average, 8
>or more softclocks.

That's not how I read it.  And where do you come up with 8?

>This is being generous, since it accepts your number of users (32) instead
>of the GENERIC number of users (10) to obtain a total of 256 buckets.

A MAXUSERS of 32 creates a callwheel size of 1024.  A MAXUSERS of 10
creates a callwheel size of 256.
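For reference, those numbers fall out of the kernel's tunables.  A
minimal sketch of the sizing arithmetic, assuming the 4.4BSD-era
formulas (the NPROC and ncallout constants here are assumptions, not
copied from the source):

```c
#include <assert.h>

/*
 * Sketch of callwheel sizing, assuming kern_clock.c-style tunables:
 * NPROC scales with MAXUSERS, ncallout adds a little slack, and the
 * wheel is rounded up to a power of two so the bucket index can be
 * computed with a cheap bitmask.
 */
static int
callwheel_size(int maxusers)
{
	int nproc = 20 + 16 * maxusers;	/* assumed NPROC formula */
	int ncallout = 16 + nproc;	/* assumed outstanding-callout limit */
	int size = 1;

	while (size < ncallout)		/* round up to a power of two */
		size <<= 1;
	return (size);
}
```

Under those assumptions, MAXUSERS 32 gives 548 callouts rounded up to
a 1024-entry wheel, and MAXUSERS 10 gives 196 rounded up to 256,
matching the figures above.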

>> Sure I have.  On each softclock, the number of timers enqueued in
>> all buckets is the number of callouts outstanding which I've said
>> is around 30 for a typical workstation.  Of course you probably
>> meant to ask what the average length of a hash chain is and to
>> answer that question for the loads I'm interested in, I'd have to
>> instrument the code.  I expect it to be h where h << n and n is
>> the total number of callouts outstanding.
>
>Well duh, or it wouldn't be a hash; the actual *average* would be
>256/30 assuming a *perfect* hash.

How can my average be 256/30 if 256 * (256/30) != 30?

>If an entry lives greater than 8.53 ticks, it will get hit twice.
>On average, for the algorithm to be less than O(n), any single entry
>must live 8 or less softclock ticks.

You're digging a hole, Terry.  An entry must live for callwheel-size
ticks in order to be touched twice during its lifetime.
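The reason is how the wheel hashes: a timeout expiring at absolute
tick t lands in bucket t & (size - 1), and softclock walks only the
bucket for the current tick.  The same bucket comes around again one
full wheel later, so an entry is examined a second time only if it
lives at least callwheel-size ticks.  A sketch, with hypothetical
helper names (the real code keeps this inline in softclock()):

```c
#include <assert.h>

#define CALLWHEEL_SIZE	256	/* MAXUSERS 10 wheel, per the above */
#define CALLWHEEL_MASK	(CALLWHEEL_SIZE - 1)

/* Bucket visited by softclock at a given absolute tick. */
static int
bucket_for(int tick)
{
	return (tick & CALLWHEEL_MASK);
}

/*
 * How many times softclock examines an entry over its lifetime:
 * once at expiry, plus once per full wrap of the wheel past its
 * bucket while it is still pending.
 */
static int
times_examined(int lifetime_ticks)
{
	return (1 + lifetime_ticks / CALLWHEEL_SIZE);
}
```

So a 30-tick callout is touched exactly once, and only an entry living
a full 256 ticks gets touched twice, hence 256, not 8, is the break-even
lifetime being argued about.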

>> Lets say that softclock is currently at bucket 2.  When will entries
>> x, y, and z next be referenced?  During the call to softclock 3 ticks
>> in the future.  If any of x, y, or z are removed before that time,
>> they will not be referenced.  The average hash chain length in this
>> callwheel is a little over 2.
>
>2.25.  Even I can accurately divide 9 by 4.

Wow!  Of course the point is that in the example, the hash chain length
is greater than 1 which nullifies your assertion.

>You still haven't answered the question about the expected average
>lifetime of an entry.

Yes I have.  I said earlier that I expect most callouts to live only for
one or two ticks for my application.

>Is an entry expected to be hit by one or more softclocks in its
>lifetime?  *THAT* is the question.  The x in O(x) is the average
>number of times that any given entry can expect to be hit by a
>softclock.

I already answered this.

>> >For the default of 10 users in GENERIC, I think this is a bad assumption.
>> >Even for 32 users (256 buckets), it's stretching things if the timers
>> >see any real use at all.
>> 
>> The hash table size is still larger than the total number of callouts
>> that can be outstanding.
>
>Do the callouts that are outstanding live more than 8 ticks?

The question should be: do they live longer than 256 ticks?

>> >No.  It will take two additional pointers and a trivial amount of
>> >code identical to the code you would need anyway.
>> 
>> Two pointers is additional space.
>
>Now who is really picking nits?  It's a hell of a lot less space
>than you are already using, and it's statically allocated.

It depends.  I don't think that using only two pointers would be the
right way to implement that algorithm.  See below.

>> It also makes softclock become q * O(h) + O(c) where q is the number
>> of deferred insertion entries, h is the hash chain length and c is
>> the number of empty buckets that are traversed to find the next
>> timeout for scheduling the next one shot.
>
>This calculation is incorrect, since I am suggesting not using a hash
>in the case of an ordered list.

So you want softclock to become q * O(n) and ditch the hash table.

I think I'd rather pay the space to keep an unordered queue for each
head in the hash table.  Then only the current bucket and the next
non-empty bucket need to have their entries insertion sorted in order
to schedule the next one shot, instead of sorting all new entries that
have come in since the last softclock regardless of how they would be
distributed.  This is more true to your desire to perform lazy
scheduling of timeouts.
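A minimal sketch of that scheme, with hypothetical structure and
function names (singly linked lists here purely for brevity; a real
implementation would use the kernel's queue macros):

```c
#include <stddef.h>

/*
 * Each wheel bucket keeps an unordered queue filled at interrupt
 * time, plus a sorted list built lazily by softclock.  Insertion
 * stays O(1); sorting cost is paid only for the bucket actually
 * being serviced.  All names here are hypothetical.
 */
struct callout {
	struct callout	*c_next;
	int		 c_time;	/* absolute expiry tick */
};

struct cw_bucket {
	struct callout	*cb_unsorted;	/* O(1) push from timeout() */
	struct callout	*cb_sorted;	/* built lazily in softclock() */
};

/* O(1) insertion at interrupt time: just push onto the bucket. */
static void
cw_insert(struct cw_bucket *b, struct callout *c)
{
	c->c_next = b->cb_unsorted;
	b->cb_unsorted = c;
}

/* Insertion-sort a bucket's queue only when softclock needs it. */
static void
cw_sort(struct cw_bucket *b)
{
	struct callout *c, **pp;

	while ((c = b->cb_unsorted) != NULL) {
		b->cb_unsorted = c->c_next;
		for (pp = &b->cb_sorted;
		    *pp != NULL && (*pp)->c_time < c->c_time;
		    pp = &(*pp)->c_next)
			;
		c->c_next = *pp;
		*pp = c;
	}
}
```

With this split, entries that are removed before their bucket comes
current are never sorted at all, which is the lazy behavior described
above.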

>If you will remember, this was a compromise for your complaint that
>insertion at interrupt was not O(1).  You weren't worried about time
>in softclock until I figured out how to move the overhead there.  Even
>so, you're still not worried about determinism in softclock; one
>indeterminate value is as good as any other.

I'm not worried about time in softclock as long as it isn't unnecessarily
excessive (O(n) is excessive) and it drops from splhigh at deterministic
intervals.

>> Then there is the additional code in softclock to ensure
>> that none of the timeouts queued while softclock is running are
>> supposed to expire in the current tick which you would need to
>> check every time softclock drops from splhigh to let other interrupts
>> run as there is nothing to prevent a timeout handler or interrupt
>> handler from scheduling such an entry.
>
>Softclock need only check the tick values from the head, and may stop
>when it hits the first entry whose tick value exceeds the current
>tick value; remember, the list under discussion is ordered?

But the calls to timeout and untimeout during softclock execution would
not be putting those entries into the ordered list; remember, we're
delaying all ordered insertion until softclock runs.

>You were also the one that argued that insertion and deletion were
>relatively rare.

I argued exactly the opposite.  Insertion and deletion are the common
case, expiration is rare.

>					Terry Lambert
>					terry@lambert.org
>---
>Any opinions in this posting are my own and not those of my present
>or previous employers.

--
Justin T. Gibbs
===========================================
  FreeBSD: Turning PCs into workstations
===========================================




