Date:      Tue, 7 Aug 2012 21:24:14 -0700 (PDT)
From:      Don Lewis <truckman@FreeBSD.org>
To:        freebsd-fs@FreeBSD.org
Subject:   ZFS questions
Message-ID:  <201208080424.q784OEfY051025@gw.catspoiler.org>

I've got a couple of questions about a raidz array that I'm putting
together.  Capacity is more important to me than speed, but I don't want
to do anything too stupid.

The fine manual says that using whole disks is preferable to using
slices because drive write-caching is enabled if the entire drive is
dedicated to ZFS, which would break things if the drive also contained a
UFS slice.  Does this really matter much if NCQ is available?  Each
drive will be dedicated to ZFS, but I'm planning on using GPT to
slightly undersize the ZFS slice on each drive to avoid any
potential issues of installing replacement drives that are slightly
smaller than the original drives.
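What I have in mind is something like the following (a rough sketch; the device name, label, and sizes here are placeholders, not the actual drives):

```shell
# Create a fresh GPT scheme on the new drive (placeholder device ada1).
gpart create -s gpt ada1

# Add a freebsd-zfs partition a little smaller than the raw disk,
# 1 MB aligned, leaving slack in case a future replacement drive
# reports a slightly smaller capacity.  The size is a placeholder.
gpart add -t freebsd-zfs -a 1m -s 2794g -l zdisk1 ada1

# The pool member would then be the labeled partition, /dev/gpt/zdisk1,
# rather than the raw ada1 device.
```

The GPT label also keeps the pool member name stable if the drives get renumbered.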

I'm slowly accumulating the drives over time, both for budgetary
reasons and to reduce the chances of multiple near-simultaneous
failures of drives from the same manufacturing batch.  I'd like to get
the array up and running before I have all the drives, but unfortunately
ZFS doesn't allow new drives to be added to an existing raidz vdev to
increase its capacity.  I do have some smaller drives and I was thinking
about pairing those up with gconcat or gstripe and configuring the ZFS
pool with the concatenated/striped pairs.  I know this isn't
recommended, but it seems to me like zpool create would accept this.
What concerns me is what happens on reboot when ZFS goes searching for
all of the components of its pool.  Will it stumble across its metadata
on the first drive of a concatenated pair and try to add that
individual drive to the pool instead of the pair?
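Concretely, the setup I'm considering looks something like this (device names and the pool layout are placeholders; assembling the concat device early via loader.conf is my assumption about how to keep ZFS from tasting the raw components first):

```shell
# Pair two smaller drives (placeholders ada4/ada5) into one gconcat device.
gconcat label -v pair0 ada4 ada5

# Load the concat module at boot so /dev/concat/pair0 exists before
# ZFS goes looking for pool components (assumption on my part).
echo 'geom_concat_load="YES"' >> /boot/loader.conf

# Build the raidz vdev from the full-size drives plus the pair.
zpool create tank raidz gpt/zdisk1 gpt/zdisk2 gpt/zdisk3 concat/pair0
```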



