Date:      Sat, 15 Sep 2012 11:36:49 +0100
From:      Mark Murray <markm@FreeBSD.org>
To:        Ben Laurie <benl@freebsd.org>
Cc:        Arthur Mesh <arthurmesh@gmail.com>, Ian Lepore <freebsd@damnhippie.dyndns.org>, Doug Barton <dougb@freebsd.org>, freebsd-security@freebsd.org, RW <rwmaillists@googlemail.com>, "Bjoern A. Zeeb" <bz@freebsd.org>
Subject:   Re: svn commit: r239569 - head/etc/rc.d
Message-ID:  <E1TCpk1-000N2H-Vq@groundzero.grondar.org>
In-Reply-To: <CAG5KPzzFO1H5Wcx34oXi09=aJqg5w+XWSd8fnn0Byvpy_8+-rA@mail.gmail.com>
References:  <50453686.9090100@FreeBSD.org> <20120911082309.GD72584@dragon.NUXI.org> <504F0687.7020309@FreeBSD.org> <201209121628.18088.jhb@freebsd.org> <5050F477.8060409@FreeBSD.org> <20120912213141.GI14077@x96.org> <20120913052431.GA15052@dragon.NUXI.org> <alpine.BSF.2.00.1209131258210.13080@ai.fobar.qr> <alpine.BSF.2.00.1209141336170.13080@ai.fobar.qr> <E1TCXN0-000NFT-7I@groundzero.grondar.org> <CAG5KPzwOdCkybj3D5uic1KC-pwW-pewgsrqrXg60f5SJjtzYPw@mail.gmail.com> <E1TCbDG-0002Hz-9D@groundzero.grondar.org> <CAG5KPzzRxzVX-+9fYjRdqjY-wScbM6AA7GYtLmktgMG0Zg8iyQ@mail.gmail.com> <E1TCbSz-0007CJ-BI@groundzero.grondar.org> <CAG5KPzyJNmXRfxtPPrdc2zVCsxGtDfJT79YC3a1PNUfOOSzt8A@mail.gmail.com> <E1TCcIq-000Brr-Ex@groundzero.grondar.org> <CAG5KPzwEESg7iUb2+-kAN+k55M95BZjh5VaSvxzSsSCVuZ9kMw@mail.gmail.com> <E1TCdlD-000C1N-4g@groundzero.grondar.org> <CAG5KPzzFO1H5Wcx34oXi09=aJqg5w+XWSd8fnn0Byvpy_8+-rA@mail.gmail.com>

Ben Laurie writes:
> > I can certainly trigger a reseed at will, but allowing external writes
> > to overwhelm the system by doing a
> >
> > $ cat /dev/zero > /dev/random
> >
> > ... just ain't gonna happen. No, sir.
> 
> Let's just quantify the risk here: essentially the problem is that if
> I feed something with no entropy into the pool and that is allowed
> to trigger a reseed, then you end up hashing what existing entropy
> you have with the no-entropy input, leading to a loss of entropy. The
> theoretical loss for a perfect hash function is log_2(N)log_2(1/e),
> where N is the number of iterations. log_2(1/e) is ~.66. So, to reduce
> the entropy from, say, 256 bits, if SHA-1 is used, to 128 bits, takes
> ~2^(128/.66) reseeds - that is, ~2^193. Around 10^58. So, you're
> right, it ain't gonna happen, even if you allow an attacker to reseed
> as often as he wants :-)
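
(As a quick check of those figures, here is a trivial userland program;
the 0.66 bits-per-reseed loss is simply taken on trust from the quoted
paragraph above.)

#include <math.h>
#include <stdio.h>

int
main(void)
{
        double loss_per_reseed = 0.66;          /* quoted bits lost per reseed */
        double bits_to_lose = 256.0 - 128.0;
        double log2_reseeds = bits_to_lose / loss_per_reseed;

        /* Prints roughly: reseeds needed ~ 2^193.9 ~ 10^58.4 */
        printf("reseeds needed ~ 2^%.1f ~ 10^%.1f\n",
            log2_reseeds, log2_reseeds * log10(2.0));
        return (0);
}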

Fine, but that is not what I'm talking about, _AT_ALL_.

Reseeds are expensive in kernel space; locking/unlocking and thread
consumption are the issue. Right now, this is mitigated by reseeding at
10Hz. To allow reseeds to overwhelm the running kernel by pumping data
into /dev/random would be very silly indeed, and I'm not going to let
that happen.
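
A minimal sketch of that sort of rate limit, purely for illustration
(the names and the userland clock are mine, not the kernel's): honour
at most RESEED_HZ reseed requests per second and quietly drop the rest.

#include <stdbool.h>
#include <stdint.h>
#include <time.h>

#define RESEED_HZ       10      /* maximum honoured reseeds per second */

static bool
reseed_allowed(void)
{
        static uint64_t last_ns;
        struct timespec ts;
        uint64_t now_ns;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        now_ns = (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
        if (now_ns - last_ns < 1000000000ULL / RESEED_HZ)
                return (false); /* too soon; ignore this reseed request */
        last_ns = now_ns;
        return (true);
}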

> I do want to see the method :-)

This is what I have so far; written, but neither tested nor
finalised. It's not the complete picture; there are minor changes
elsewhere.

The intention is to reduce the number of calls to
random_harvest_internal(). All entropy supplied by this method is
assumed to be junk/hostile; anything supplied that is not junk is a
free gift. The TSC (cycle counter) is already incorporated by
random_harvest_internal(), so the extra mixing of it into the chunk
here is just added help.

static void
random_yarrow_write(void *buf, int count)
{
        /* This static buffer is uninitialised; this is deliberate. */
        static uint8_t chunk[HARVESTSIZE];
        static int chunk_pos = 0;
        union {
                uint64_t u64;
                uint8_t u8[sizeof(uint64_t)];
        } fastcounter;
        int i;
        uint8_t *inbuf;

        /*
         * Accumulate the input into a HARVESTSIZE chunk. The writer has too
         * much control here, so "estimate" the entropy as zero.
         */
        if (buf != NULL) {
                inbuf = buf;
                for (i = 0; i < count; i++) {
                        chunk[chunk_pos] ^= inbuf[i];
                        chunk_pos = (chunk_pos + 1) % HARVESTSIZE;
                }
                /* Mix in the cycle counter as well; it costs nothing. */
                fastcounter.u64 = get_cyclecount();
                for (i = 0; i < sizeof(uint64_t); i++) {
                        chunk[chunk_pos] ^= fastcounter.u8[i];
                        chunk_pos = (chunk_pos + 1) % HARVESTSIZE;
                }
        } else {
                /* A NULL buffer hands the accumulated chunk to the harvester. */
                random_harvest_internal(get_cyclecount(), chunk, HARVESTSIZE,
                    0, 0, RANDOM_WRITE);
        }
}
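
(Purely to illustrate the calling convention implied above, with a
hypothetical caller that is not part of the change: a write with a
buffer only accumulates into the chunk, and a call with a NULL buffer
hands the accumulated chunk to the harvester.)

static void
example_write_path(uint8_t *user_data, int len)
{
        /* Hypothetical caller, not part of the patch: accumulate the
         * (untrusted) bytes at zero estimated entropy, then flush the
         * chunk into the harvest queue with a NULL buffer. */
        random_yarrow_write(user_data, len);
        random_yarrow_write(NULL, 0);
}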

> My point here is that you don't have full control of the inputs
> to /dev/random, so assuming that they take some particular form
> seems like a mistake to me - the aim, I would hope, would be to
> extract available entropy from whatever inputs you get, regardless of
> quality.  So, the argument against xor is that it is possible for a
> careless/naive person to shoot themselves in the foot, and it would
> be nice to avoid that - it seems unkind to assume that everyone who
> wants to help the PRNG is going to be knowledgeable about its inner
> workings.

This conversation is being reset back 12+ years. *SIGH*. I get the
distinct impression that I'm starting again from scratch here, and I'm
not sure that I have either the energy or inclination to do that.

Are you aware of Yarrow's approach to poor entropy while harvesting?

M
--
Mark R V Murray
Cert APS(Open) Dip Phys(Open) BSc Open(Open) BSc(Hons)(Open)
Pi: 132511160



