Date:      Sat, 26 Aug 2000 15:32:38 -0500
From:      Adam Back <adam@cypherspace.org>
To:        mark@grondar.za
Cc:        current@freebsd.org, kris@freebsd.org, jeroen@vangelderen.org
Subject:   Re: yarrow & /dev/random
Message-ID:  <200008262032.PAA05849@cypherspace.org>
In-Reply-To: <200008261533.e7QFXUp25804@grimreaper.grondar.za> (message from Mark Murray on Sat, 26 Aug 2000 17:33:29 +0200)


Mark writes:
> > You really can't use yarrow to implement /dev/random as it is.  
> > [...]
> 
> OK; what then? The existing MD5 based system is very attackable, and
> protects itself very poorly.

My argument for Linux is to leave it as it is, and to see if we can
persuade the yarrow authors to change yarrow so that it exports a
/dev/random-compatible API.

Isn't FreeBSD using the same Ted Ts'o code?  It's "good enough" IMO
that there is no rush to change it until we can preserve its API
semantics.  The Linux version has been switched to SHA1, though IMO
Dobbertin's pseudo-collision attack doesn't break MD5 in any
practical way for this purpose.  People are just moving away from MD5
as conservative design, in case someone manages to extend the attack.

> My approach to this (and this is the first time I am stating this in
> such a wide forum) is to provide another device (say /dev/srandom) for
> folk who want to do their own randomness processing. This would provide
> a structure of data including the entropy, the system nanotime and the
> source, so the user can do his own hard work when he needs to, and the
> folk simply needing a stream of "good" random numbers can do so from
> /dev/random.

You don't want people to have to work hard -- they just want to
retain the /dev/random API, which works and has understood semantics.

> > However it is not fair to impose that view on someone.  People can
> > have legitimate reasons to need more entropy.  Another very concrete
> > example is: say someone is using a yarrow-160 (3DES and SHA1)
> > implementation and they want to use an AES cipher with a 256 bit key
> > -- without the /dev/random API, you can't get 256 bit security, with
> > it you can.
> 
> Sooner or later someone is going to come up with a requirement for
> M-bit randomness on Yarrow-N, where M > N. What then?

You could use the /dev/random API if your entropy requirements are
greater than the output size of /dev/urandom (implemented with yarrow
or otherwise).  With the API we could add a call to ask the device
what its output block size is.  And/or we could define a value
exported from random.h for the bit strength of /dev/urandom, though
that risks missing changes over time.
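
To illustrate the idea, here is a minimal sketch.  Note that
URANDOM_STRENGTH_BITS is a hypothetical constant (not anything
random.h actually exports today); the point is only how a caller
would choose between the two devices:

```python
import os

# Hypothetical exported value: the bit strength of /dev/urandom's
# output function (160 for a SHA1-based yarrow-160).  An assumption
# for this sketch, not a real interface.
URANDOM_STRENGTH_BITS = 160

def key_bytes(bits):
    # If the PRNG output function covers the requested strength,
    # /dev/urandom is enough; beyond that, only the blocking,
    # entropy-accounted /dev/random can deliver the claimed strength.
    if bits <= URANDOM_STRENGTH_BITS:
        return os.urandom(bits // 8)       # PRNG output suffices
    with open("/dev/random", "rb") as f:   # blocking, entropy-limited
        return f.read(bits // 8)
```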

> > OTPs and some algorithms offer information theoretic security, or you
> > may be using a larger key space symmetric construct than the yarrow
> > output size (using 256 doesn't solve that -- then they want a 512 bit
> > key).  Worse people may already be using /dev/random for these
> > assumptions, so you risk breaking existing code's security by
> > replacing /dev/random with yarrow.
> 
> I can't help broken applications, but if we provide a better API, and
> get folk to use it, that everyone wins.

The applications aren't broken.  They are using the advertised
/dev/random API, and some people are proposing to pull the rug out
from under them and effectively symlink /dev/random to /dev/urandom.

As they may have relied on better than /dev/urandom for security, you
may break the security of their application.

> > > If I construct a specific hash function, is this still a problem?
> > 
> > No.  Note my other comments on list about CBC-MAC are confused -- I
> > misread your code.  It appears to be a keyless hash function, and as
> > Jeroen noted it has some similarities to Davies-Meyer, but it's not
> > quite the same for the reasons he noted.
> 
> I fixed that and made it Davies-Meyer.
> http://people.freebsd.org/~markm/randomdev.patch

Looks good.  API comment: you might want a hash_final implemented as
a memcpy, because some hashes you might swap in have a finalization
phase (MD5 and SHA1 mix in the length as a last step).  Also, other
hash APIs usually don't have a hash_init which takes the magic
constants as an argument.  As you're not using any constants (and are
relying on 0 as the magic constant / Davies-Meyer IV) you could
remove that.  Then you'd have the classic MD5 API, which would make
plugging in other hashes easy.
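
For illustration, a minimal sketch of that classic init/update/final
shape around a keyless Davies-Meyer (H_i = E_{M_i}(H_{i-1}) XOR
H_{i-1}).  A toy XTEA cipher stands in for the patch's cipher, and
the zero IV and plain zero-padding are simplifying assumptions, not
the patch's exact behaviour:

```python
import struct

def xtea_encrypt(key, block, rounds=32):
    """Toy 64-bit block cipher (XTEA), a stand-in for the real cipher."""
    v0, v1 = struct.unpack(">2I", block)
    k = struct.unpack(">4I", key)
    s, delta, mask = 0, 0x9E3779B9, 0xFFFFFFFF
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + k[s & 3]))) & mask
        s = (s + delta) & mask
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + k[(s >> 11) & 3]))) & mask
    return struct.pack(">2I", v0, v1)

class DaviesMeyerHash:
    """Keyless Davies-Meyer: each message block keys the cipher."""
    BLOCK = 16                       # one cipher key's worth of message

    def __init__(self):
        self.state = b"\x00" * 8     # zero IV, as in the keyless design
        self.buf = b""

    def update(self, data):
        self.buf += data
        while len(self.buf) >= self.BLOCK:
            m, self.buf = self.buf[:self.BLOCK], self.buf[self.BLOCK:]
            e = xtea_encrypt(m, self.state)
            self.state = bytes(a ^ b for a, b in zip(e, self.state))

    def final(self):
        # No length-strengthening step here (unlike MD5/SHA1), so
        # final is just a copy of the chaining state after padding.
        if self.buf:
            self.update(b"\x00" * (self.BLOCK - len(self.buf)))
        return bytes(self.state)
```

Since there are no per-hash magic constants, __init__ takes no
arguments and final is effectively a memcpy of the state.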

Crypto construct-wise, I don't think you can treat BF-CBC over a 256
bit plaintext with a 256 bit key as a virtual 256 bit block cipher
operation.  I suspect the result will be weaker than 256 bits because
of the internal structure of BF-CBC.

If you want a 256 bit hash (and it is desirable for AES) you could
use what Jeroen suggested: abreast Davies-Meyer with a 128 bit block
cipher.  Or we could wait for the AES hash mode.

Twofish in abreast Davies-Meyer mode is going to blow away the
BF-CBC-256 pseudo 256 bit block cipher Davies-Meyer performance-wise,
because of its key agility.

> > | So given that, it doesn't seem quite fair to pull the rug from under
> > | /dev/random users and replace it with a PRNG with quite different
> > | security assumptions.  Users would have similar reasons to be upset if
> > | someone removed their /dev/random and symlinked it to /dev/urandom.
> 
> ...unless we can somehow get /dev/random to be "secure enough".

I think we have an obligation to attempt to make it no less secure
than the current /dev/random; and of course we should try to make it
as secure as we can in general.  See below for my ideas of how you
might do that.

> > and after more arguments, more formally argued:
> :
> :
> > | Even if I have a mechanism to wait for a reseed after each output and
> > | reserve that output for me, I get at best R*2^160 bits for R reseeds,
> > | rather than the 2^{R*160} bits I wanted.
> > | 
> > | Note the yarrow-160 API and design doesn't allow me to wait for and
> > | reserve the output of a reseed in a multi-tasking OS -- /dev/random
> > | does.
> 
> Hmm. Most convincing argument I have heard so far. How much of a
> practical difference does that make, though, with ultra-conservative
> entropy estimation (e.g. I am stirring in nanotime(9), but not making
> any randomness estimates from it, so the device is getting some "free"
> entropy)?

The quality of the de-skewing function, conservative assumptions
about the distribution of entropy across samples, and conservative
entropy estimates don't help.  It's the yarrow output function that
blows it.
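
To make the counting argument quoted above concrete, a small
arithmetic sketch (pure counting, no crypto):

```python
# With a 160-bit output function, R reseeds bound the attacker's
# search at R * 2**160 candidate output states; an application
# drawing R "independent" 160-bit values under /dev/random semantics
# is assuming a space of 2**(160*R).
n = 160
for R in (2, 4, 8):
    reachable = R * 2 ** n       # best case with yarrow-160 reseeds
    assumed = 2 ** (n * R)       # what /dev/random semantics promise
    assert reachable < assumed
print("yarrow-160 gives R*2^160 states, not 2^(160*R)")
```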

The solution as I see it is to modify yarrow to bypass the yarrow
output function and grab raw de-skewing function output for
/dev/random output.  You'd also want to do what John Kelsey was
suggesting and XOR the bypassed de-skewing function output with
/dev/urandom output as an additional safety measure.
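
The XOR safety measure is simple; a sketch (the function name and the
idea that the two streams are independent are assumptions of this
illustration):

```python
def dev_random_output(pool_out: bytes, prng_out: bytes) -> bytes:
    # XOR the bypassed de-skewing-function (pool) output with
    # /dev/urandom (PRNG) output.  Provided the two streams are
    # independent, the result is at least as unpredictable as the
    # stronger of the two inputs.
    assert len(pool_out) == len(prng_out)
    return bytes(a ^ b for a, b in zip(pool_out, prng_out))
```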

But let's get this put in yarrow-160-a, rather than making our own
variant -- then we can say we are using stock yarrow, and other yarrow
users benefit.

> Folk who are generating OTPs for anything other than personal use
> would be insane to use anything other than custom hardware such as
> the ubiquitous geiger-counter or zener noise generator.

Even those hardware devices have biases and rely on software
de-skewing functions.  Provided we do a good job of de-skewing and
entropy estimation, and make good assumptions about entropy
distribution, I see no inherent reason why we can't generate OTP
quality randomness from generic PC hardware.  There is real entropy
in that mouse swirl and keyboard input.
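
As an example of such a de-skewing function, the classic von Neumann
corrector (one of many; real pools use cryptographic mixing on top):

```python
def von_neumann(bits):
    # Von Neumann corrector: look at non-overlapping pairs of raw
    # bits; emit 0 for (0,1), 1 for (1,0), and discard (0,0) and
    # (1,1).  Independent-but-biased input comes out unbiased, at
    # the cost of throughput.
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out
```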

Adam

