Date:      Fri, 9 Mar 2012 15:22:53 +0100
From:      Fabian Keil <freebsd-listen@fabiankeil.de>
To:        freebsd-stable@freebsd.org
Subject:   Re: FreeBSD root on a geli-encrypted ZFS pool
Message-ID:  <20120309152253.17a108c2@fabiankeil.de>
In-Reply-To: <BABF8C57A778F04791343E5601659908236BDA@cinip100ntsbs.irtnog.net>
References:  <BABF8C57A778F04791343E5601659908236BD9@cinip100ntsbs.irtnog.net> <20120307174850.746a6b0a@fabiankeil.de> <BABF8C57A778F04791343E5601659908236BDA@cinip100ntsbs.irtnog.net>

"xenophon\\+freebsd" <xenophon+freebsd@irtnog.org> wrote:

> > -----Original Message-----
> > From: Fabian Keil [mailto:freebsd-listen@fabiankeil.de]
> > Sent: Wednesday, March 07, 2012 11:49 AM

> > It's not clear to me why you enable geli integrity verification.
> >
> > Given that it is single-sector-based it seems inferior to ZFS's
> > integrity checks in every way and could actually prevent ZFS from
> > properly detecting (and depending on the pool layout correcting)
> > checksum errors itself.
>
> My goal in encrypting/authenticating the storage media is to prevent
> unauthorized external data access or tampering.  My assumption is that
> ZFS's integrity checks have more to do with maintaining metadata
> integrity in the event of certain hardware or software faults (e.g.,
> operating system crashes, power outages) - that is to say, ZFS cannot
> tell if an attacker boots from a live CD, imports the zpool, fiddles
> with something, and reboots, whereas GEOM_ELI can if integrity checking
> is enabled (even if someone tampers with the encrypted data).

If the ZFS pool is located on GEOM_ELI providers, the attacker
shouldn't be able to import it unless the passphrase and/or
keyfile are already known.
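
For example, an attacker booting from a live CD can't even see the
pool before the provider is attached (untested sketch; the device
name ada0 and the keyfile path are placeholders):

  # Without the passphrase/keyfile there is no ada0.eli provider,
  # so "zpool import" finds nothing to import:
  zpool import

  # Only after a successful attach does the pool become visible:
  geli attach -k /root/ada0.key /dev/ada0   # prompts for the passphrase
  zpool import tank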

If the attacker tampers with the encrypted data used by the pool,
ZFS should detect it, unless it's a replay attack in which case
enabling GEOM_ELI's integrity checking wouldn't have helped you
either.

If the attacker only replays a couple of blocks, ZFS's integrity
checks are likely to catch the replay for most of them, while
GEOM_ELI's integrity checking will not detect it for any block.
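
To check what ZFS detected after suspected tampering, scrubbing the
pool and inspecting its status should be enough (assuming a pool
named tank):

  zpool scrub tank
  # Lists checksum errors per vdev and, where known, affected files:
  zpool status -v tank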

In my opinion protecting ZFS's default checksums (which cover
non-metadata as well) with GEOM_ELI is sufficient. I don't see
what advantage additionally enabling GEOM_ELI's integrity
verification offers.
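
In other words, something like the following should suffice (untested
sketch; ada0 and the keyfile path are placeholders, note the absence
of geli's -a option):

  # Encryption only, no geli-level HMAC:
  geli init -s 4096 -K /root/ada0.key /dev/ada0
  geli attach -k /root/ada0.key /dev/ada0
  # ZFS's own checksums are stored inside the encrypted provider:
  zpool create tank ada0.eli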

>                                                                This does
> raise an interesting question that merits further testing: What happens
> if a physical sector goes bad, whether that's due to a system bus or
> controller I/O error, a physical problem with the media itself, or
> someone actively tampering with the encrypted storage?  GEOM_ELI would
> probably return some error back to ZFS for that sector, which could
> cause the entire vdev to go offline but might just require scrubbing the
> zpool to fix.
>
> > I'm also wondering if you actually benchmarked the difference
> > between HMAC/MD5 and HMAC/SHA256. Unless the difference can
> > be easily measured, I'd probably stick with the recommendation.
>
> I based my choice of HMAC algorithm on the following forum post:
>
> http://forums.freebsd.org/showthread.php?t=12955

I'm wondering if dd's block size is correct; 4096 bytes seems rather small.

Anyway, it's a test without a file system, so the ZFS overhead isn't
measured. I wasn't entirely clear about it, but my assumption was
that the ZFS overhead might be big enough to make the difference
between HMAC/MD5 and HMAC/SHA256 a lot less significant.
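
One way to include that overhead would be to benchmark through a
pool created on a throw-away memory disk (untested sketch; sizes
and the algorithm choice are just examples):

  mdconfig -a -t swap -s 2g                 # creates e.g. md0
  geli onetime -a HMAC/SHA256 -s 4096 md0   # one-time key, no passphrase
  zpool create testpool md0.eli
  dd if=/dev/zero of=/testpool/testfile bs=1m count=1024
  # Repeat with -a HMAC/MD5 and without -a for comparison.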

> I wouldn't recommend anyone use MD5 in real-world applications, either,
> so I'll update my instructions to use HMAC/SHA256 as recommended by
> geli(8).

It's still not clear to me why you recommend using an HMAC for geli at all.

> > I would also be interested in benchmarks that show that geli(8)'s
> > recommendation to increase geli's block size to 4096 bytes makes
> > sense for ZFS. Is anyone aware of any?
>
> As far as I know, ZFS on FreeBSD has no issues with 4k-sector drives,
> see Ivan Voras' comments here:
>=20
> http://ivoras.net/blog/tree/2011-01-01.freebsd-on-4k-sector-drives.html
>
> Double-checking my zpool shows the correct value for ashift:
>
>   masip205bsdfile# zdb -C tank | grep ashift
>                   ashift: 12

I'm currently using sector sizes between 512 and 8192 bytes, so I'm
not actually expecting technical problems; it's just not clear to me
how much the sector size matters and whether 4096 is actually the
best value when using ZFS.
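
The sector sizes actually in use can be checked on both layers
(assuming a provider ada0.eli and a pool named tank):

  diskinfo -v /dev/ada0.eli | grep sectorsize   # geli's sector size
  zdb -C tank | grep ashift                     # ZFS uses 2^ashift bytes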

> Benchmarking different geli sector sizes would also be interesting and
> worth incorporating into these instructions.  I'll add that to my to-do
> list as well.

Great.

Fabian
