Date:      Wed, 05 Sep 2012 15:20:13 -0700
From:      Doug Barton <dougb@FreeBSD.org>
To:        obrien@freebsd.org
Cc:        Arthur Mesh <arthurmesh@gmail.com>, freebsd-security@FreeBSD.org, freebsd-rc@FreeBSD.org, Mark Murray <markm@FreeBSD.org>
Subject:   Re: svn commit: r239569 - head/etc/rc.d
Message-ID:  <5047D01D.5000802@FreeBSD.org>
In-Reply-To: <20120905203222.GA2920@dragon.NUXI.org>
References:  <201208221843.q7MIhLU4077951@svn.freebsd.org> <5043DBAF.40506@FreeBSD.org> <20120905203222.GA2920@dragon.NUXI.org>

On 09/05/2012 13:32, David O'Brien wrote:
> On Sun, Sep 02, 2012 at 03:20:31PM -0700, Doug Barton wrote:
>> On 08/22/2012 11:43, David E. O'Brien wrote:
>>> Author: obrien
>>> Date: Wed Aug 22 18:43:21 2012
>>> New Revision: 239569
>>> URL: http://svn.freebsd.org/changeset/base/239569
>>>
>>> Log:
>>>   Remove old entropy seeding after consumption initializing /dev/random PRNG.
>>>   Not doing so opens us up to replay attacks.
>>
>> I object to this change, and would like to see it discussed more.
>>
>> When I did the original implementation of the entropy seeding scripts
>> this issue was discussed, and the decision not to remove the entropy
>> after seeding was deliberate.
> 
> Hi Doug,
> I would like to refresh my memory of this discussion.  Can you help
> me narrow down the date and mailing list such that I can go find it
> in the archives?  It may help me understand your POV in this thread.

I've explained my perspective as well as I can already. This has
probably occurred to you, but searching the archives during the months
prior to the first commit of libexec/save-entropy is a good start.
-security, -arch, -current ... maybe -hackers. Beyond that I really
can't help; it was 12 years ago, after all. :)

> I've read what I could find from Bruce Schneier on entropy seeding.
> My reading is that not deleting the seed input goes squarely against
> the Yarrow inventors' recommendations.  I tried to document this in
> the commit.

Yes, I understand what you and Arthur are proposing. I also explained in
detail in one of my replies to Arthur that I agree with this in
principle, which is why the system that saves entropy files eventually
replaces them all.
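
To make that concrete, the rotation that /usr/libexec/save-entropy does
from cron looks roughly like this (a from-memory paraphrase, not the
literal script; file names and defaults may differ in your tree):

    n=${entropy_save_num:-8}
    dir=${entropy_dir:-/var/db/entropy}

    # Drop the oldest saved file and shift the rest down one slot...
    rm -f "${dir}/saved-entropy.${n}"
    i=${n}
    while [ ${i} -gt 1 ]; do
        prev=$((i - 1))
        [ -f "${dir}/saved-entropy.${prev}" ] && \
            mv "${dir}/saved-entropy.${prev}" "${dir}/saved-entropy.${i}"
        i=${prev}
    done

    # ...then write a fresh file into slot 1. After entropy_save_num runs
    # of this, nothing saved before the current boot remains on disk.
    dd if=/dev/random of="${dir}/saved-entropy.1" \
        bs=${entropy_save_sz:-2048} count=1 2>/dev/null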

My concern is for the case where the system is rebooted immediately
after (as you propose) the old files are deleted. In that case, my
understanding is that the combination of old entropy files and the new
material that is added at each boot (both by the commands run in
initrandom, and by hardware harvesting) is both better than not having
the old entropy files available, AND not subject to replay attacks.
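
As a rough sketch of the boot-time ordering I'm describing (the commands
are illustrative, not the literal initrandom/rc.d/random scripts):

    # 1. "Better than nothing" material gathered early in boot, from system
    #    state that varies at least somewhat from boot to boot:
    ( ps -gauxwww; sysctl -a; dmesg ) 2>/dev/null > /dev/random

    # 2. The files saved under /var/db/entropy, plus the seed written at
    #    the previous clean shutdown:
    cat /var/db/entropy/saved-entropy.* /entropy > /dev/random 2>/dev/null

    # 3. On top of that, the kernel keeps harvesting from hardware sources
    #    (interrupts, etc.) from the moment the device is loaded, so even
    #    identical files never reproduce the same internal state.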

You and Arthur are putting forward a theory that I'm wrong on both
counts, without actually demonstrating the truth of your claims. If you
think that the system as it existed before your changes is vulnerable,
it would be nice to have that demonstrated.

> Do you have access to Practical Cryptography, ISBN: 0-471-22357-3 by
> Niels Ferguson and Bruce Schneier that you could read 10.5 and 10.6
> and give your thoughts?

Not handy, no. My recollection is that not reusing static entropy files
is the best-case scenario, and that case is handled by my first example
below (a system that runs longer than 88 minutes).

>> There are 3 possibilities. First, the
>> system boots normally, gets seeded, and runs for a period of time longer
>> than ($entropy_save_num x cron interval), which by default is 88
>> minutes. In this case all of the entropy files will be replaced, so the
>> "postrandom" change will be spurious.
> 
> I almost agree, but not quite.  My read of /usr/libexec/save-entropy
> is that it does not overwrite ${entropy_file}. 

Of course not, that's handled by /etc/rc.d/random at shutdown time. As
I've explained already, the 2 things are entirely separate in order to
minimize the writes to the root file system.
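
For reference, the shutdown-time half is essentially this (paraphrased
from memory; the real rc.d/random has more error handling):

    : ${entropy_file:=/entropy}
    dd if=/dev/random of="${entropy_file}" bs=4096 count=1 2>/dev/null
    chmod 600 "${entropy_file}"

That's one write to the root file system at shutdown; everything the
cron job writes goes to /var.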

> Only that
> /usr/libexec/save-entropy saves additional seeding material.
> 
> Thus there is still a chance of replaying (reseeding with)
> ${entropy_file}.

If the system shuts down cleanly, $entropy_file will be a new one. One
of the reasons for keeping the static files in /var/db/entropy is to
account for the case of a fast reboot where the material in
$entropy_file is of low quality because the device never got an adequate
seed.

> I'm curious, where did the default value of ${entropy_save_num} of "8"
> come from?  Given we're talking about real machines and thus finite
> constraints in space, why stuff in 8 * 2k worth of seed all at the same
> time?  What is the improvement in the pseudo randomness in /dev/random
> output after that much seeding?  Why not 1/2 that value (4)?  Or why not
> 9, to maximize the amount of seeding within a single digit extension?

The number 8 was chosen pseudo-randomly (pardon the pun) as a value that
would certainly be "more than enough" without taking up too much of what
might be precious space in /var. It was made a variable in order to
allow for users who wanted more or needed less.
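
For anyone following along, these are the knobs involved, with the
defaults as I remember them from /etc/defaults/rc.conf (please check
your own version before relying on them):

    entropy_enable="YES"          # seed /dev/random at startup
    entropy_file="/entropy"       # written by rc.d/random at clean shutdown
    entropy_dir="/var/db/entropy" # written by save-entropy from cron
    entropy_save_num="8"          # how many saved files to keep
    entropy_save_sz="2048"        # size of each saved file, in bytes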

If you'd like to do some rigorous testing and demonstrate that a
different default is needed, I would be happy to see that.

>> In the second case, the system boots successfully, gets seeded, but runs
>> for less than the default 88 minutes. In that case there will be at
>> least (uptime / cron interval) new files, and the same number of old
>> files removed. So while some of the entropy will be "stale" at next
>> reboot, it won't all be the same, so even the stale entropy is better
>> than nothing in helping to reseed.
> 
> It seems this is a point of contention.  Arthur and I disagree.

Yes, I understand that you disagree. :)

> I believe you do not feel seeding with (uptime / cron interval) new
> [/dev/random generated] files is sufficient for a good pseudo-random
> /dev/random.  Is that correct?  I believe you are questioning what is
> enough entropy seeding for obtaining a secure key, and that
> entropy_save_num (=8) is required or strongly recommended.
> 
> This is such a key question that Schneier even states this is probably
> the hardest problem to solve in PRNG design. 

Yup. That much I remember very clearly: the decision was made to try to
compensate for all possible weaknesses of a real-world system, vs. a
theoretically perfect one. That's why we replace the files as soon as is
practical, but keep the ones we haven't had a chance to replace yet.

> The yarrow design has
> mechanisms it uses to answer this.  It tracks the number of bits of
> entropy fed into it to decide for itself.

Yup. :)

>> In the third case, the system boots, but is then rebooted again before
>> the cron interval has had a chance to replace even 1 file. This is the
>> case where removing the old entropy is particularly pathological. It
>> reduces the available seeding material without adding anything new. From
>> a security perspective, that's worse than the possibility of a replay
>> attack.
> 
> How is it worse?  /etc/rc.d/postrandom does add something new by
> generating a new ${entropy_file} during this time -- providing for 4k
> of good entropy seed.  Let's assume only a 10% entropy rate.  That's
> still 409 bits of entropy -- which I claim is good enough to prime
> yarrow to operate securely.

And what I'd like to see is some proof of that claim.

>> For all 3 cases, it's important to keep in mind a few things. Primarily,
>> yarrow is designed to avoid exactly the kind of "replay" problem that
>> this change was intended to fix, so it's almost certainly at best
>> unnecessary.
> 
> I am not sure which section of http://www.schneier.com/paper-yarrow.ps.gz
> you are referring to.  I do not see "replay" explicitly stated.

I'm referring to the way that Yarrow uses entropy internally which is
(in part) designed to help prevent exactly the kind of replay attack
you're referring to.

> Please note that section 3.1 "How PRNGs are Compromised" in the
> paragraph titled "Mishandling of Keys and Seed Files" Schneier states:
> 
>     seed files are easy to mishandle in various ways, ..., or by
>     opening a seed file, but failing to update it every time it is used.

First, this paragraph is talking about systems that are not Yarrow.
Second (and once again), it is talking about a theoretically perfect
system. The word "reboot" doesn't appear in the paper either. :)

> That statement is in agreement with everything else I've seen from
> Schneier on this subject.
> 
> Also section 5.2 "Entropy Accumulator" in the "Security Arguments"
> paragraph:
> 
>     Consider the situation of an attacker trying to predict the whole
>     sequence of inputs to be fed into the user's entropy accumulator.
>     ...
>     Ultimately, an attacker in this position cannot be resisted
>     effectively by the design of the algorithm, ...

So once again, you have to take into account the _whole_ sequence of
inputs. By default, /dev/random starts hardware seeding the moment it is
loaded. The commands in initrandom also provide _some_ entropy (albeit,
in a non-trivial number of cases, not much). And you conveniently cut
out the bit from that paper about how difficult the attacker's job is
even if they can predict every byte of input. :)

However, this does give me another idea about how we can improve the
system. rc.d/random currently slurps the files in numerical order. We
could use some bit of the data that is generated at boot time to pluck
out a "random" value for the starting point. That would make the
attacker's job harder, even with perfect knowledge of what the inputs are.

>> Of nearly equal importance it should be kept in mind that
>> we add a non-zero amount of unique material at every boot, so a true
>> replay attack isn't possible, even without this change.
> 
> What is the non-zero amount of unique material we seed at every boot?

See above.

> Most of the 'better_than_nothing' output is guessable by a local non-root
> account. 

And how are they going to be logged in at boot time? I also think you're
rather dramatically underestimating the increase in difficulty of
guessing the interior state that even a small change in the inputs
(hardware as well as initrandom) provides. Please note that this is a
different issue from the overall quality of the entropy available to the
device at, or shortly after boot time.

> The best thing saving us when the seed inputs are known, is the
> stirring in of the CPU cycle counter.

Yes, that's one of the factors that I believe helps refute your claim
that reusing the seed files after a short uptime leaves us vulnerable to
replay attacks.

>> In short, this change is at best unnecessary, and possibly detrimental.
> 
> I do not see how it is either of those.  Please explain further.

I have now explained the same points repeatedly to both you and Arthur.
IMO, neither of you has chosen to thoroughly address the issues I've
raised; instead, the same assertions have been repeated. I've asked
repeatedly for you (pl.) to demonstrate the truth of your claims, and
given that you (pl.) are the ones proposing a fairly dramatic change to
a security system that has served for almost 12 years, I think the
burden of proof is on you.

Further, prior to this message I made one concrete proposal that I think
helps address your concerns: writing a new entropy file to
/var/db/entropy at boot time. I have yet to hear either of you respond
to that.

I'm also making another proposal above, "randomizing" the start point in
the list of files from /var/db/entropy, which I think unarguably
strengthens the overall system, even if only a little. I also think that
it helps to address your concerns about replay, so I'm interested in
your thoughts on that issue as well.

Doug

-- 

    I am only one, but I am one.  I cannot do everything, but I can do
    something.  And I will not let what I cannot do interfere with what
    I can do.
			-- Edward Everett Hale, (1822 - 1909)


