Date:      Tue, 16 May 2017 16:00:15 -0700
From:      Aaron <drizzt321@gmail.com>
To:        freebsd-questions@freebsd.org
Subject:   Re: ZFS root on single SSD?
Message-ID:  <CAEsW2o9xDtD+K0=BsNhWgWn+Jr1Os38Eu-6yJzO-uzAXrLfDBA@mail.gmail.com>
In-Reply-To: <20170516222456.q3wuwlthgpoup7md@ozzmosis.com>
References:  <CAEsW2o88qA_YGxHC+5nWsi90yJfXKkCSV7tACstK6_hLNgu4HQ@mail.gmail.com> <20170516222456.q3wuwlthgpoup7md@ozzmosis.com>


On Tue, May 16, 2017 at 3:24 PM, andrew clarke <mail@ozzmosis.com> wrote:

> On Mon 2017-05-15 22:45:19 UTC-0700, Aaron (drizzt321@gmail.com) wrote:
>
> > So, I've been running a ZFS root mirror across 2 spinning disks, and I'm
> > upgrading my home server/NAS and planning on running root on a spare SSD.
> > However, I'm unsure if it'd be better to run UFS as a single-drive root
> > instead of ZFS, although I do love all of the ZFS features (snapshots,
> > COW, scrubbing, etc.) and would still like to keep them for my root
> > drive, even if I'm not mirroring at all. I do notice that FreeBSD has
> > TRIM support for ZFS (see http://open-zfs.org/wiki/Features#TRIM_Support).
>
> ICYMI, FreeBSD also has TRIM support for UFS. See the -t flag for the
> newfs command.
>

Ah, I guess I just assumed UFS had it; I hadn't actually checked. Thanks!
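
For the archives, checking/enabling TRIM on either filesystem is quick.
Device names below are just for illustration; on FreeBSD 10/11 the ZFS
side is a sysctl:

    # ZFS: confirm the TRIM knob is on (1 = enabled, the default)
    sysctl vfs.zfs.trim.enabled

    # UFS: set the TRIM flag when creating a filesystem (-U = soft updates)
    newfs -t -U /dev/ada0p2

    # UFS: or flip it later on an existing, unmounted filesystem
    tunefs -t enable /dev/ada0p2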


>
> > So is there a good reason NOT to run ZFS root on a single drive SSD?
>
> A good question that I've often wondered about.
>
> The first reply at
>
> https://forums.freenas.org/index.php?threads/single-drive-zfs.35515/
>
> hints that metadata corruption on a pool located entirely on a single
> magnetic drive could lead to failure of the entire pool, and given the
> lack of easy-to-use repair tools for ZFS, that would mean rebuilding
> the pool. I think in reality this would be quite rare though, and
> hopefully wouldn't be a huge issue anyway, provided you keep regular
> backups.
>
> Using an SSD might change things a little should the drive begin to
> fail, but I get the impression modern SSDs tend to fail a bit more
> gracefully than the old ones. I've no experience here and am
> interested in any anecdata.
>
> Keep in mind you also have other options, such as splitting the drive
> into separate UFS and ZFS partitions, or creating a ZFS pool from a
> file on UFS. The latter probably has performance drawbacks, but they
> might be negated by the performance of the SSD.
>
> Regards
> Andrew
>
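
For what it's worth, the file-backed pool idea is easy to try; a rough
sketch (paths hypothetical, and note file vdevs are mainly intended for
testing rather than production use):

    # Create a fixed-size (sparse) backing file on the UFS filesystem
    truncate -s 20G /ufs/zpool.img

    # Create a pool on the file vdev (zpool requires an absolute path)
    zpool create filepool /ufs/zpool.img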

I think most modern SSDs have pretty good internal checks because of how
they use MLC/TLC NAND and how it fails. The biggest risk I can think of is
a controller/board failure, rather than suddenly having a massive number of
blocks fail. However, it's a fair point that without copies=2 (or more),
bit-rot/corruption would be detectable, but it wouldn't be possible to
reconstruct the bad blocks.
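
If anyone wants to try it, setting that is a one-liner (dataset name
hypothetical; note the property only applies to data written after it is
set, not to existing blocks):

    zfs set copies=2 zroot/ROOT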

Side note: there's an interesting copies=2 resiliency test at
http://jrs-s.net/2016/05/09/testing-copies-equals-n-resiliency/, though I
probably won't be using copies=2 myself, at least not for an SSD.

--Aaron


