Date:      Sat, 26 Aug 2000 17:33:29 +0200
From:      Mark Murray <mark@grondar.za>
To:        Adam Back <adam@cypherspace.org>
Cc:        mark@grondar.za, current@freebsd.org, kris@freebsd.org, jeroen@vangelderen.org
Subject:   Re: yarrow & /dev/random 
Message-ID:  <200008261533.e7QFXUp25804@grimreaper.grondar.za>
In-Reply-To: <200008261459.JAA05375@cypherspace.org> ; from Adam Back <adam@cypherspace.org>  "Sat, 26 Aug 2000 09:59:48 EST."
References:  <200008261459.JAA05375@cypherspace.org> 

> You really can't use yarrow to implement /dev/random as it is.  Even
> waiting for reseeds doesn't cut it.  The issue is that everything goes
> through the yarrow output function, which restricts yarrow to offering
> computational security, with at worst a 2^n work factor to break,
> because it offers known plaintext (the 0 block): the first output is
> E_k(0).
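
(A minimal sketch of the point as I read it, assuming a generic
counter-mode output stage; encrypt_block(), the 16-byte sizes and the
guessable counter start are hypothetical stand-ins, not Yarrow's actual
3DES code:)

#include <stdint.h>
#include <string.h>

/* hypothetical block cipher primitive: out = E_key(in) */
void encrypt_block(const uint8_t key[16], const uint8_t in[16],
    uint8_t out[16]);

struct ctr_gen {
	uint8_t  key[16];	/* output key K, set at reseed time */
	uint64_t counter;	/* starts at a value an attacker can guess */
};

/*
 * Produce one output block.  The first call encrypts a counter value
 * the attacker already knows, so that block is a known-plaintext pair
 * for K; recovering K is then at worst a brute-force search over the
 * key space, no matter how much entropy went into the reseed.
 */
static void
gen_block(struct ctr_gen *g, uint8_t out[16])
{
	uint8_t block[16];

	memset(block, 0, sizeof(block));
	memcpy(block, &g->counter, sizeof(g->counter));
	encrypt_block(g->key, block, out);
	g->counter++;
}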

OK; what then? The existing MD5-based system is very attackable, and
protects itself very poorly.

My approach to this (and this is the first time I am stating this in
such a wide forum) is to provide another device (say /dev/srandom) for
folk who want to do their own randomness processing. This would provide
a structure of data including the raw entropy, the system nanotime and
the source, so the user can do his own hard work when he needs to, and
folk simply needing a stream of "good" random numbers can do so from
/dev/random.
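
(A rough sketch of what one /dev/srandom record could look like; the
struct name, field types and sizes here are all hypothetical, not a
committed interface:)

#include <stdint.h>

struct srandom_event {
	uint64_t nanotime;	/* system nanotime when the sample was harvested */
	uint32_t source;	/* which harvester produced it (keyboard, interrupt, ...) */
	uint32_t len;		/* number of valid bytes in entropy[] */
	uint8_t  entropy[32];	/* the raw, unprocessed sample */
};

A read(2) on the device would then return a stream of these records,
and the consumer does his own estimation, whitening and mixing.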

> > I am against the blocking model, as I believe that it goes against
> > what Yarrow is trying to do. If the Yarrow authors argued otherwise,
> > I'd listen.
> 
> Niels and John Kelsey were against it too, initially, on the grounds
> that computational security (160 bits -- or whatever the parameter is
> with the ciphers you have plugged in) is in fact "good enough" in
> practice even for 1024 bit RSA key generation.
> 
> (The argument has some validity; in practice a brute force attack
> against RSA 1024 takes significantly less than 2^160 operations,
> though the memory requirements are higher).

I've heard that that is down to something like 2^90, or a lot less...

> However it is not fair to impose that view on someone.  People can
> have legitimate reasons to need more entropy.  Another very concrete
> example is: say someone is using a yarrow-160 (3DES and SHA1)
> implementation and they want to use an AES cipher with a 256 bit key
> -- without the /dev/random API, you can't get 256 bit security, with
> it you can.

Sooner or later someone is going to come up with a requirement for
M-bit randomness from Yarrow-N, where M > N. What then?
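
(As a rough bound, under the usual assumption that a generator with an
N-bit key/state can never hand out more than N bits of unpredictability:)

\[
  \text{effective strength of an $M$-bit key drawn from Yarrow-$N$}
  \;\le\; \min(M, N)
\]

So a 256-bit AES key pulled from Yarrow-160 gets at most roughly 160
bits of computational security, which is the point made above.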

> OTPs and some algorithms offer information theoretic security, or you
> may be using a larger key space symmetric construct than the yarrow
> output size (using 256 doesn't solve that -- then they want a 512 bit
> key).  Worse, people may already be using /dev/random for these
> assumptions, so you risk breaking existing code's security by
> replacing /dev/random with yarrow.

I can't help broken applications, but if we provide a better API and
get folk to use it, then everyone wins.

> > If I construct a specific hash function, is this still a problem?
> 
> No.  Note my other comments on list about CBC-MAC are confused -- I
> misread your code.  It appears to be a keyless hash function, and as
> Jeroen noted it has some similarities to Davies-Meyer, but it's not
> quite the same for the reasons he noted.

I fixed that and made it Davies-Meyer.
http://people.freebsd.org/~markm/randomdev.patch
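
(For the record, a minimal sketch of the Davies-Meyer step as I
understand it; encrypt_block() and the 16-byte block size below are
hypothetical placeholders, not the actual code in randomdev.patch:)

#include <stdint.h>
#include <stddef.h>

/* hypothetical block cipher primitive: out = E_key(in) */
void encrypt_block(const uint8_t key[16], const uint8_t in[16],
    uint8_t out[16]);

/*
 * One Davies-Meyer compression step:
 *
 *	H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}
 *
 * The message block keys the cipher, the running hash value is the
 * plaintext, and the old hash value is fed forward with XOR so the
 * step cannot be undone by simply decrypting.
 */
static void
davies_meyer_step(uint8_t hash[16], const uint8_t msg[16])
{
	uint8_t tmp[16];
	size_t i;

	encrypt_block(msg, hash, tmp);
	for (i = 0; i < 16; i++)
		hash[i] ^= tmp[i];
}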

> The main argument is against using constructs which haven't
> received lots of peer-review -- most crypto constructs are very
> fragile to small design changes.

Agreed.

> | So given that, it doesn't seem quite fair to pull the rug from under
> | /dev/random users and replace it with a PRNG with quite different
> | security assumptions.  Users would have similar reasons to be upset if
> | someone removed their /dev/random and symlinked it to /dev/urandom.

...unless we can somehow get /dev/random to be "secure enough".

> and after more arguments, more formally argued:
:
:
> | Even if I have a mechanism to wait for a reseed after each output and
> | reserve that output for me, I get at best R*2^160 bits for R reseeds,
> | rather than the 2^{R*160} bits I wanted.
> | 
> | Note the yarrow-160 API and design doesn't allow me to wait for and
> | reserve the output of a reseed in a multi-tasking OS -- /dev/random
> | does.

Hmm. That is the most convincing argument I have heard so far. How much
of a practical difference does it make, though, with ultra-conservative
entropy estimation (e.g., I am stirring in nanotime(9) but not making
any randomness estimates from it, so the device is getting some "free"
entropy)?
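
(To put a number on the quoted bound, as I read it: with R reserved
reseeds the attacker's search space grows only linearly in R, not
exponentially:)

\[
  R \cdot 2^{160} \;\ll\; 2^{R \cdot 160} \qquad (R \ge 2)
\]

e.g. two reseeds give at most $2 \cdot 2^{160} = 2^{161}$ candidate
output streams, against the $2^{320}$ one would want for a 320-bit
information-theoretic requirement.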

PCs are pretty low-entropy devices; users who need lots of random
bits (as opposed to a steady supply of random numbers) are arguably
going to need to go to extraordinary lengths to get them; their
own statistical analysis is almost certainly going to be required.

Software that I am (partially) aware of that uses /dev/random usually
doesn't trust it too much anyway (q.v. PGP; OpenSSH is an exception),
and programmers often come up with elaborate schemes to compel the
OS to provide their requisite bits.

Folk who are generating OTPs for anything other than personal use
would be insane to use anything other than custom hardware, such as
the ubiquitous Geiger counter or Zener noise generator.

M
--
Mark Murray
Join the anti-SPAM movement: http://www.cauce.org





