Date: Tue, 09 Dec 2008 02:22:42 -0600
From: "James R. Van Artsdalen" <james-freebsd-fs2@jrv.org>
To: Bryan Alves <bryanalves@gmail.com>
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS resize disk vdev
Message-ID: <493E2AD2.8070704@jrv.org>
In-Reply-To: <92f477740812082155y3365bec7v5574206dd1a98e26@mail.gmail.com>
References: <92f477740812082155y3365bec7v5574206dd1a98e26@mail.gmail.com>
Bryan Alves wrote:
> I'm thinking about using a hardware raid array with ZFS, using a single
> disk vdev zpool.  I want the ability to add/remove disks to an array, and
> I'm still unsure of the stability of zfs as a whole.  I'm looking for an
> easy way to resize and manage disks that are greater than 2 terabytes.
>
> If I have a single block device, /dev/da0, on my system that is
> represented by a zfs disk vdev, and the size of this block device grows
> (because the underlying hardware raid expands), will zfs correctly
> expand?  And will it correctly expand in place?

I see no benefit to using hardware RAID for a vdev.  If there is any
concern over ZFS stability then you're using a filesystem you suspect on
an - at best - really reliable disk: not a step forward!

I think best practice is to configure the disk controller to present the
disks as JBOD and let ZFS handle things: avoid fancy hardware RAID
controllers altogether and use the fastest JBOD controller configuration
available.

Using a hardware RAID seems likely to hurt performance since the hardware
RAID must issue extra reads for partial parity-stripe updates: ZFS never
does in-place disk writes and rarely if ever does partial parity-stripe
updates.  Block allocation will suffer since the filesystem allocator
can't know the geometry of the underlying storage array when laying out a
file.  Parity rebuilds ("resilvering") can be much faster in ZFS since
only things that are different need to be recomputed when a disk is
reattached to a redundant vdev (and if a disk is replaced, free space need
not have parity computed).  And hardware RAID just adds another layer of
processing to slow things down.

I'm not sure how ZFS reacts to an existing disk drive suddenly becoming
larger.  Real disk drives don't do that and ZFS is intended to use real
disks.  There are some uberblocks (pool superblocks) at the end of the
disk, and ZFS probably won't be able to find them if the uberblocks at the
front of the disk are clobbered and the "end of the disk" has moved out
away from the remaining uberblocks.

You can replace all of the members of a redundant vdev one-by-one with
larger disks and increase the storage capacity of that vdev and hence the
pool.

I routinely run zpools of 4TB and 5TB, which isn't even warming up for
some people.  Sun has had customers with ZFS pools in the petabytes.
"disks that are greater than 2 terabytes" are pocket change.
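
For concreteness, here is a sketch of the JBOD-plus-raidz setup and the
disk-by-disk replacement described above.  The pool name "tank" and the
daN device names are hypothetical, and the exact behavior depends on the
ZFS version in use:

    # Create a raidz pool directly on the JBOD disks instead of
    # layering ZFS on top of a hardware RAID volume.
    zpool create tank raidz da0 da1 da2 da3

    # Grow the vdev later by replacing each member with a larger disk,
    # letting the resilver finish before moving on to the next disk.
    zpool replace tank da0 da4
    zpool status tank    # wait for resilver, then repeat for da1, da2, da3

Once every member of the vdev has been replaced, the added capacity
becomes usable; on later ZFS releases this may also require setting the
pool's autoexpand property or running "zpool online -e" on each device,
so check the behavior of the release you are running.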
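
The on-disk labels (which hold the uberblocks mentioned above) can be
inspected with zdb; again only a sketch, and the device name is
hypothetical:

    # Print the four ZFS labels on a device.  Labels 0-1 sit near the
    # start of the device and labels 2-3 near the end, which is why a
    # device that suddenly grows leaves its old end-of-disk labels
    # stranded in the middle rather than at the new end.
    zdb -l /dev/da0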