Date:      Thu, 24 Apr 1997 16:03:38 +0200
From:      Stefan Esser <se@freebsd.org>
To:        Don Lewis <Don.Lewis@tsc.tdk.com>
Cc:        dg@root.com, Poul-Henning Kamp <phk@dk.tfs.com>, Michael Hancock <michaelh@cet.co.jp>, fs@freebsd.org
Subject:   Re: the namei cache...
Message-ID:  <19970424160338.57282@x14.mi.uni-koeln.de>
In-Reply-To: <199704241256.FAA11674@salsa.gv.tsc.tdk.com>; from Don Lewis on Thu, Apr 24, 1997 at 05:56:01AM -0700
References:  <dg@root.com> <199704241256.FAA11674@salsa.gv.tsc.tdk.com>

On Apr 24, Don Lewis <Don.Lewis@tsc.tdk.com> wrote:
> Yes, most processors are really bad at integer divide.  You might look at
> the Perl hash.  You multiply the current value by 33 and add the next
> character, then mask off however many bits you want at the end (the table
> size must be a power of 2).  If your compiler is at all smart, it will
> notice that the multiply is just a shift and an add.  I suspect this might
> be pretty easy to just drop in to the existing code for a quick trial.
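The "times 33" scheme described above can be sketched in C as follows. This is a hypothetical illustration, not code from the Perl sources; the function name hash33 and its signature are my own. The multiply by 33 is written as a shift and an add, which is what a smart compiler would emit anyway:

```c
#include <stddef.h>

/* Perl-style "times 33" hash: multiply the running value by 33,
 * add the next character, and mask to the table size at the end.
 * The table size must be a power of 2, so hashmax = size - 1.
 * Hypothetical sketch, for comparison with hashf() below. */
unsigned long
hash33 (unsigned long hashmax, const void *key, size_t keylen)
{
    const unsigned char *p = key;
    unsigned long h = 0;

    while (keylen--)
	h = (h << 5) + h + *p++;	/* h = h * 33 + c */
    return h & hashmax;			/* mask instead of modulo */
}
```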

I used the following as a hash function, after looking up
algorithms in Knuth's book and studying lots of examples
in several software packages. I needed a fast hash that
works on power-of-2-sized tables, with keys that are
mostly ASCII text. I did quite a bit of profiling and
testing with my data sets, and the following gave good
results:

#include <stddef.h>

#define SHVAL		(8)
#define MULTIPLIER	(0x5425daf1)

unsigned long
hashf (unsigned long hashmax, void *key, size_t keylen)
{
    unsigned long i = 0;
    unsigned char *p = key;

    while (keylen--)
    {
	unsigned char c = *p++;
	i -= (i << 4) ^ c;		/* mix in the next byte */
/*	i -= (i << 4) + (i >> 23) ^ c;	   variant feeding back high bits */
    }
    i *= MULTIPLIER;			/* final scramble of the accumulator */
    i >>= SHVAL;
    return i & hashmax;			/* hashmax must be 2^n - 1 */
}

This led to significantly fewer collisions than other,
simpler hash functions. I tried other versions that fed
back some of the high-order bits, but surprisingly found
that this did not help reduce the number of collisions,
and thus was not worth the additional shift and add.
But this is possibly not true in the case of file names
that tend to have a common suffix (.tar.gz, for example).

(The value of hashmax must obviously be a power of 2
minus 1 ...)
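That convention can be made concrete with a small helper. This is my own hypothetical sketch (the name hashmask is not from the original code): it rounds a requested minimum table size up to the next power of 2 and returns the mask that hashf expects as its hashmax argument.

```c
/* Round min_entries up to the next power of 2 and return that
 * size minus 1, i.e. the bit mask suitable for hashf()'s hashmax.
 * Hypothetical helper, assuming the hashf() definition above. */
unsigned long
hashmask (unsigned long min_entries)
{
    unsigned long size = 1;

    while (size < min_entries)
	size <<= 1;			/* next power of 2 */
    return size - 1;			/* e.g. 1000 -> 1023 */
}
```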

I used tables from a few thousand up to a few hundred
thousand entries, BTW.

Regards, STefan
