Date:      Fri, 21 Dec 2012 09:06:59 -0500
From:      Paul Kraus <paul@kraus-haus.org>
To:        freebsd-questions@freebsd.org
Subject:   ZFS info WAS: new backup server file system options
Message-ID:  <282CDB05-5607-4315-8F37-3EEC289E83F5@kraus-haus.org>
In-Reply-To: <CACo--muRK_pqrBhL2LLcnByTrVKfXQfFGZDn-NqpQndm3Th=RA@mail.gmail.com>
References:  <CACo--muRK_pqrBhL2LLcnByTrVKfXQfFGZDn-NqpQndm3Th=RA@mail.gmail.com>

On Dec 21, 2012, at 7:49 AM, yudi v wrote:

> I am building a new FreeBSD fileserver to use for backups, and will
> be using 2-disk RAID mirroring in an HP MicroServer N40L. I have
> gone through some of the documentation and would like to know what
> file systems to choose.
>
> According to the docs, UFS is suggested for the system partitions,
> but someone on the FreeBSD IRC channel suggested using ZFS for the
> rootfs as well.
>
> Are there any disadvantages to using ZFS for the whole system rather
> than going with UFS for the system files and ZFS for the user data?

	First, a disclaimer: I have been working with Solaris since
1995 and have managed lots of data under ZFS, but I have only been
working with FreeBSD for about the past six months.

	UFS is clearly very stable and solid, but to get redundancy you
need to use a separate "volume manager".
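
On FreeBSD that usually means GEOM; a minimal sketch using gmirror
(the device names ada1/ada2 and the label gm0 are examples, not from
the original post):

  gmirror load                     # load the mirror GEOM class
  gmirror label -v gm0 ada1 ada2   # build a 2-disk mirror
  newfs /dev/mirror/gm0            # put a UFS filesystem on it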

	ZFS is a completely different way of thinking about managing
storage (not just a filesystem). I prefer ZFS for a number of reasons:

1) End-to-end data integrity through checksums. With the advent of
1 TB plus drives, the uncorrectable error rate (typically 10^-14 or
10^-15) means that over the life of any drive you *are* now likely to
run into uncorrectable errors. This means that traditional volume
managers (which rely on the drive reporting bad reads and writes)
cannot detect these errors, and bad data will be returned to the
application.
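
ZFS verifies those checksums on every read, and you can also verify
the whole pool on demand; a sketch, assuming a pool named "tank":

  zpool scrub tank     # walk every block and verify its checksum
  zpool status tank    # the CKSUM column reports errors found (and,
                       # given redundancy, repaired from a good copy)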

2) Simplicity of management. Since the volume management and
filesystem layers have been combined, you don't have to manage each
separately.
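
For example (pool and device names are placeholders), a mirrored pool
and a filesystem on it take two commands, with no separate volume
manager, partitioning, or newfs step:

  zpool create tank mirror ada1 ada2
  zfs create tank/backups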

3) Flexibility of storage. Once you build a zpool, the filesystems
that reside on it share the storage of the entire zpool. This means
you don't have to decide how much space to commit to a given
filesystem at creation. It also means that all the filesystems
residing in that one zpool share the performance of all the drives in
that zpool.
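
If you do want limits or guarantees, they are just properties you can
change at any time; a sketch, continuing with the hypothetical pool
from above:

  zfs create tank/home
  zfs set quota=200G tank/home        # upper bound, adjustable later
  zfs set reservation=50G tank/home   # guaranteed minimum
  zfs list                            # everything draws on one pool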

4) Specific to booting off of ZFS, if you move drives around (as I
tend to do in at least one of my lab systems) the bootloader can
still find the root filesystem under ZFS, since it refers to it by
ZFS pool/dataset name, not physical drive device name. Yes, you can
tell the bootloader where to find root if you move it, but ZFS does
that automatically.
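
On FreeBSD 9 this boils down to the bootfs pool property plus two
loader.conf lines; a sketch, assuming a pool "zroot" with a root
dataset "zroot/ROOT" (names are examples):

  zpool set bootfs=zroot/ROOT zroot

  # /boot/loader.conf
  zfs_load="YES"
  vfs.root.mountfrom="zfs:zroot/ROOT"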

5) Zero performance penalty snapshots. The only cost to snapshots is
the space necessary to hold the data. I have managed systems with
over 100,000 snapshots.
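
Snapshots are copy-on-write, so creating one is effectively
instantaneous regardless of filesystem size; a sketch:

  zfs snapshot tank/backups@2012-12-21   # instant, initially 0 bytes
  zfs list -t snapshot                   # space grows only as data
                                         # diverges from the snapshot
  zfs rollback tank/backups@2012-12-21   # revert if needed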

	I am running two production systems, one lab system, and a
bunch of VBox VMs, all with ZFS. The only issue I have seen is one I
have also seen under Solaris with ZFS: certain kinds of hardware-layer
faults will cause the ZFS management tools (the zpool and zfs
commands) to hang waiting on a blocking I/O that will never return.
The data continues to be available; you just can't manage the ZFS
infrastructure until the device issues are cleared. For example, if
you remove a USB drive that hosts a mounted ZFS filesystem, then any
attempt to manage that ZFS device will hang (zpool export -f
<zpool name> hangs until a reboot).

	Previously I had been running (at home) a fileserver under
OpenSolaris using ZFS, and it saved my data when I had multiple drive
failures. At a certain client we had a 45 TB configuration built on
top of 120 750 GB drives. We had multiple levels of redundancy and
could survive a complete failure of 2 of the 5 disk enclosures (yes,
we tested this in pre-production).
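
The post doesn't spell out the layout, but the usual way to get that
property is to build each raidz2 vdev from one disk per enclosure, so
losing any two enclosures costs each vdev only two disks; a sketch
with made-up device names (encXdY = enclosure X, disk Y):

  zpool create bigpool \
    raidz2 enc0d0 enc1d0 enc2d0 enc3d0 enc4d0 \
    raidz2 enc0d1 enc1d1 enc2d1 enc3d1 enc4d1
    # ... and so on for the remaining rows of disks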

	There are a number of good writeups on how to set up a FreeBSD
system to boot off of ZFS. I like this one the best:
http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE , but I do
the zpool/zfs configuration slightly differently (based on some hard
learned lessons on Solaris). I am writing up my configuration (and
why I do it this way), but it is not ready yet.
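
The core of that procedure, heavily condensed (ada0 and the
zroot/disk0 names are examples; see the wiki page for the full
steps):

  gpart create -s gpt ada0
  gpart add -s 64K -t freebsd-boot ada0
  gpart add -t freebsd-zfs -l disk0 ada0
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
  zpool create zroot /dev/gpt/disk0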

	Make sure you look at all the information here:
http://wiki.freebsd.org/ZFS , keeping in mind that lots of it was
written before FreeBSD 9. I would NOT use ZFS, especially for
booting, prior to release 9 of FreeBSD. Part of the reason is the
bugs that were fixed by zpool version 28 (included in release 9).
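
You can check what your system and a given pool are running; a sketch
(the pool name is a placeholder):

  zpool upgrade -v         # list zpool versions this system supports
  zpool get version tank   # report the version a pool is running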

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



