Date:      Tue, 9 Dec 2008 11:04:03 -0500
From:      "Bryan Alves" <bryanalves@gmail.com>
To:        freebsd-fs@freebsd.org
Subject:   Re: ZFS resize disk vdev
Message-ID:  <92f477740812090804k102dcb62qcd893b3263da56a9@mail.gmail.com>
In-Reply-To: <493E2AD2.8070704@jrv.org>
References:  <92f477740812082155y3365bec7v5574206dd1a98e26@mail.gmail.com> <493E2AD2.8070704@jrv.org>

On Tue, Dec 9, 2008 at 3:22 AM, James R. Van Artsdalen
<james-freebsd-fs2@jrv.org> wrote:

> Bryan Alves wrote:
> > I'm thinking about using a hardware raid array with ZFS, using a single
> > disk vdev zpool.  I want the ability to add/remove disks to an array, and
> > I'm still unsure of the stability of zfs as a whole.  I'm looking for an
> > easy way to resize and manage disks that are greater than 2 terabytes.
> >
> > If I have a single block device, /dev/da0, on my system that is
> > represented by a zfs disk vdev, and the size of this block device grows
> > (because the underlying hardware raid expands), will zfs correctly
> > expand?  And will it correctly expand in place?
> >
> I see no benefit to using hardware RAID for a vdev.  If there is any
> concern over ZFS stability then you're using a filesystem you suspect on
> an - at best - really reliable disk: not a step forward!  I think best
> practice is to configure the disk controller to present the disks as
> JBOD and let ZFS handle things: avoid fancy hardware RAID controllers
> altogether and use the fastest JBOD controller configuration available.
>
> Using a hardware RAID seems likely to hurt performance since the
> hardware RAID must issue extra reads for partial parity-stripe updates:
> ZFS never does in-place disk writes and rarely if ever does partial
> parity-stripe updates.  Block allocation will suffer since the
> filesystem allocator can't know the geometry of the underlying storage
> array when laying out a file.  Parity rebuilds ("resilvering") can be
> much faster in ZFS since only things that are different need to be
> recomputed when a disk is reattached to a redundant vdev (and if a disk
> is replaced, free space need not have parity computed).  And hardware
> RAID just adds another layer of processing to slow things down.
>
> I'm not sure how ZFS reacts to an existing disk drive suddenly becoming
> larger.  Real disk drives don't do that and ZFS is intended to use real
> disks.  There are some uberblocks (pool superblocks) at the end of the
> disk and ZFS probably won't be able to find them if the uberblocks at
> the front of the disk are clobbered and the "end of the disk" has moved
> out away from the remaining uberblocks.
>
> You can replace all of the members of a redundant vdev one-by-one with
> larger disks and increase the storage capacity of that vdev and hence
> the pool.
>
> I routinely run zpools of 4TB and 5TB, which isn't even warming up for
> some people.  Sun has had customers with ZFS pools in the petabytes.
> "disks that are greater than 2 terabytes" are pocket change.


My reason for wanting to use my hardware controller isn't for speed, it's
for the ability to migrate in place.  I'm currently using 5 750GB drives,
and I would like the flexibility to be able to purchase a 6th and grow my
array by 750GB in place.  If I could achieve anything similar in ZFS (namely,
buy fewer disks than are already in the array and still see a gain in storage
capacity), I would use ZFS.
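
As far as I know the closest ZFS gets to this today is striping another
top-level vdev into the pool rather than widening the existing raidz; a rough
sketch, with hypothetical pool and device names:

    # grow the pool by adding a new mirrored pair as a second vdev;
    # capacity increases by roughly one drive's worth
    # (zpool may want -f if the redundancy differs from the existing raidz)
    zpool add tank mirror da5 da6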

If I could take a zpool that consists of a raidz vdev of my 5 750GB drives,
then go off and purchase three new 1.5TB drives, create a second raidz vdev
and stripe it in, and later have the ability to remove a vdev from the pool
without data loss (assuming I have enough free space), then I would be happy.
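
For what it's worth, the first half of that is just a zpool add; a rough
sketch with made-up names (the second half, removing a top-level vdev again
later, is not something ZFS supports as far as I know):

    # stripe a second raidz vdev of the three new 1.5TB drives into the pool
    zpool add tank raidz da5 da6 da7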

Maybe I am insanely overthinking this, and the use case of wanting to tack
on 1 or 2 new drives isn't worth stressing out about.  I'm looking for a
middle ground between keeping an array as-is and replacing every drive in the
array at once to see a tangible gain.

Also, I'm using the hardware raid controller because it reports which drive
failed and lights up its fail LED.  If I can get a physical indication of
which drive died (assuming I have per-drive lights) with ZFS, then that is a
non-issue.
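
For reference, zpool status does at least name the failed member, though
mapping that back to a physical bay is left to the controller or enclosure;
the output looks roughly like this (illustrative only):

    zpool status tank
      ...
      NAME        STATE     READ WRITE CKSUM
      tank        DEGRADED     0     0     0
        raidz1    DEGRADED     0     0     0
          da0     ONLINE       0     0     0
          da1     FAULTED      0     0     0  too many errors
          da2     ONLINE       0     0     0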


