Date:      Mon, 29 Nov 1999 13:17:00 -0800 (PST)
From:      Kris Kennaway <kris@hub.freebsd.org>
To:        Dan Moschuk <dan@FreeBSD.ORG>
Cc:        Bruce Evans <bde@zeta.org.au>, Mike Smith <msmith@FreeBSD.ORG>, audit@FreeBSD.ORG, Warner Losh <imp@village.org>
Subject:   Re: cvs commit: src/sys/i386/conf files.i386 src/sys/kern kern_fork.c src/sys/libkern arc4random.c src/sys/sys libkern.h
Message-ID:  <Pine.BSF.4.21.9911291304160.51314-100000@hub.freebsd.org>
In-Reply-To: <19991129153250.A2999@spirit.jaded.net>

On Mon, 29 Nov 1999, Dan Moschuk wrote:

> | Yep - the one in the Linux kernel is 1.06 or so of the same code (we have
> | 0.95, OpenBSD 1.00). OpenBSD have essentially welded arc4random() to the
> | output of read_random for their /dev/arandom, whereas we just hash
> | whatever we can get from the entropy pool (possibly nothing) with MD5
> | until we fill the buffer, for /dev/urandom (/dev/random is just the MD5
> | hash of as much entropy as is present in both cases).
> 
> Hashing is done for good reason; if we expose our internal state through
> random numbers, they are possible to predict.  Running the data through
> MD5 reduces this risk.

Yes.

> | It's been a while since I checked, but I think in Linux they (perhaps
> | gratuitously) use SHA1 instead of MD5. It looks like there have been some
> | changes in the entropy-stirring and extraction mechanism in the underlying
> | code, though, so it's probably worthwhile updating. Whether the arandom
> | method is better than urandom is I guess open for debate :-)
> 
> SHA1 generates a bigger hash than MD5, so for that reason it is probably
> worth switching to.  However...

Immaterial. SHA1 is like a bucket which can be filled with up to 160 bits
of entropy, so we feed it up to 160 bits at a time from the pool. MD5 is a
smaller bucket, so we feed it chunks of 128 bits. The issue in the Linux
case was presumably concern about the strength of the hash function
itself (I don't think those concerns are warranted enough to justify
replacing MD5 with a much slower algorithm).
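The chunked extraction being described can be sketched roughly as below. The
pool layout and the stand-in hash_block() (substituting for real MD5/SHA-1)
are invented for illustration only; this is not the actual kernel code:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define DIGEST_BYTES 16   /* 128-bit MD5-sized "bucket"; 20 for SHA-1 */

/* Toy stand-in for a real cryptographic hash (assumption, demo only). */
static void hash_block(const uint8_t *in, size_t len,
                       uint8_t out[DIGEST_BYTES])
{
    memset(out, 0, DIGEST_BYTES);
    for (size_t k = 0; k < len; k++)
        out[k % DIGEST_BYTES] ^= (uint8_t)(in[k] + (uint8_t)k);
}

/* Fill `buf` by hashing the pool one digest-sized bite at a time,
 * mirroring the "feed the bucket up to digest-size bits" description. */
static void extract(const uint8_t *pool, size_t poollen,
                    uint8_t *buf, size_t buflen)
{
    uint8_t digest[DIGEST_BYTES];
    size_t off = 0, pos = 0;

    while (off < buflen) {
        size_t bite = poollen - pos < DIGEST_BYTES ? poollen - pos
                                                   : DIGEST_BYTES;
        hash_block(pool + pos, bite, digest);
        size_t n = buflen - off < DIGEST_BYTES ? buflen - off
                                               : DIGEST_BYTES;
        memcpy(buf + off, digest, n);
        off += n;
        pos = (pos + bite) % poollen;
    }
}
```

Swapping DIGEST_BYTES from 16 to 20 changes only the bite size, which is why
the hash's output width alone is immaterial to the scheme.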

> | I don't know what Theodore Ts'o's credentials are, but I'm still much more
> | inclined to trust the work of someone who does this stuff for a living
> | than a part-time cryptographer. AFAIK no professional cryptographers have
> | taken a serious look at "our" (Linux/Open/FreeBSD) PRNG and the effects
> | of any random twiddles people may have done to them over time.
> 
> ... I have to agree with you here.  If we were to pit Theodore Ts'o's routine
> against the possibility of using Yarrow, I would choose Yarrow.  Just because
> OpenBSD uses this, doesn't mean we have to.  In fact, ideally what I would
> like to see is this:

Wrt the OpenBSD /dev/arandom algorithm (their equivalent of our
/dev/urandom), I did some thinking over lunch and came to the conclusion
that it's actually worse than our current one:

arc4random() (their implementation, and the one we have in libc) reseeds
itself based on the contents of the entropy pool every 128 accesses. This
means that if we break the state of the PRNG, we get on average 64 free
"random" numbers with perfect certainty, and if furthermore we are
aggressively draining the /dev/random entropy pool (which /dev/arandom
reseeds itself from), then we continue to know the state of the PRNG with
high probability indefinitely into the future.
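The reseed-every-128-accesses behaviour can be sketched as a plain arc4 (RC4)
stream generator that re-keys itself on a fixed interval. The entropy stub
and the interval constant are illustrative assumptions, not the real libc or
kernel values:

```c
#include <stddef.h>
#include <stdint.h>

#define RESEED_INTERVAL 128  /* accesses between re-keys, as described */

static uint8_t S[256];
static uint8_t si, sj;
static unsigned count;       /* outputs remaining before the next reseed */

/* Hypothetical stand-in for reading the kernel entropy pool. */
static void read_entropy(uint8_t *buf, size_t len)
{
    for (size_t k = 0; k < len; k++)
        buf[k] = (uint8_t)(k * 131 + 17);   /* fixed bytes, demo only */
}

/* Standard RC4 key schedule. */
static void arc4_key(const uint8_t *key, size_t keylen)
{
    for (int k = 0; k < 256; k++)
        S[k] = (uint8_t)k;
    uint8_t j = 0;
    for (int k = 0; k < 256; k++) {
        j = (uint8_t)(j + S[k] + key[k % keylen]);
        uint8_t t = S[k]; S[k] = S[j]; S[j] = t;
    }
    si = sj = 0;
}

/* Re-key from the entropy source and reset the access counter. */
static void reseed(void)
{
    uint8_t key[32];
    read_entropy(key, sizeof(key));
    arc4_key(key, sizeof(key));
    count = RESEED_INTERVAL;
}

static uint8_t arc4_byte(void)
{
    if (count == 0)          /* window boundary: mix in fresh entropy */
        reseed();
    count--;
    si++;
    sj = (uint8_t)(sj + S[si]);
    uint8_t t = S[si]; S[si] = S[sj]; S[sj] = t;
    return S[(uint8_t)(S[si] + S[sj])];
}
```

An attacker who captures S, si, sj just after a reseed can predict every
output until `count` next reaches zero, which is where the "on average 64
free numbers" figure comes from.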

Contrast our algorithm, which effectively reseeds itself on every access. If
we break the state, we get 0 free accesses with certainty, only probabilistic
knowledge thereafter, and a decay time a factor of 128 shorter. The downside is
that we use up our entropy faster (someone should really do some
measurements as to how fast entropy is actually generated on a typical PC)
and the algorithm is slower (MD5 vs arc4).

Yarrow (as I recall; it's been a while since I looked at it) compensates
for this by keeping two entropy pools: one public and one private, which
we reseed from. The attacker can only drain the public pool, which
doesn't affect the future state of the PRNG. I don't know about the
overall speed of Yarrow relative to the other two.
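A toy sketch of that two-pool idea, as recalled above: the pool sizes, the
XOR "mixing", and all the names are invented for illustration, and this is
not the real Yarrow specification:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define POOL_BYTES 64

struct pool {
    uint8_t buf[POOL_BYTES];
    size_t fill;              /* bytes of credited entropy */
};

static struct pool public_pool;   /* consumers may drain this one */
static struct pool private_pool;  /* used only to reseed the generator */

static size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }

/* Stir an incoming sample into both pools (toy XOR mixing stands in
 * for real cryptographic stirring). */
static void add_entropy(const uint8_t *sample, size_t len)
{
    for (size_t k = 0; k < len; k++) {
        public_pool.buf[k % POOL_BYTES] ^= sample[k];
        private_pool.buf[k % POOL_BYTES] ^= (uint8_t)(sample[k] + 1);
    }
    public_pool.fill = min_sz(public_pool.fill + len, POOL_BYTES);
    private_pool.fill = min_sz(private_pool.fill + len, POOL_BYTES);
}

/* An attacker can force reads here and run the public credit to zero... */
static size_t read_public(uint8_t *out, size_t len)
{
    size_t n = min_sz(len, public_pool.fill);
    memcpy(out, public_pool.buf, n);
    public_pool.fill -= n;
    return n;
}

/* ...but those reads never touch the private pool, so the next reseed
 * still draws on entropy the attacker could not drain or observe. */
static void reseed_generator(uint8_t key[POOL_BYTES])
{
    memcpy(key, private_pool.buf, POOL_BYTES);
}
```

The point of the split is visible in the interfaces: read_public() is the
only attacker-reachable drain, and reseed_generator() depends only on the
pool that drain cannot reach.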

This seems to imply we shouldn't blindly use arc4random() in every case as
OpenBSD have done, but neither should we always use read_random() unless
we've got lots of entropy to play with and the speed doesn't matter. I
hate grey areas :-)

> i) Yarrow (or possibly something else should your research yield Yarrow as
>    ``unsafe'') routines built into the kernel.
> 
> ii) Replace random() with yarrow_random() in all instances
> 
> iii) Replace /dev/*random with routines from Yarrow.
> 
> Indeed this is a little bit of work, but anything that allows me to further
> put off NFS locking is OK with me. :-)

This is probably a good plan of attack. On the other hand, our current
/dev/random is probably quite "good enough" for now, and there are other
things we can fix in the meantime with much greater benefit (like all
those pesky buffer overflows :-)

Kris
