From: Paul Kraus <paul@kraus-haus.org>
To: freebsd-questions@freebsd.org
Date: Fri, 21 Dec 2012 09:06:59 -0500
Subject: ZFS info WAS: new backup server file system options
On Dec 21, 2012, at 7:49 AM, yudi v wrote:

> I am building a new FreeBSD fileserver to use for backups, and will be using a 2
> disk RAID mirror in an HP MicroServer N40L.
> I have gone through some of the documentation and would like to know what
> file systems to choose.
>
> According to the docs, UFS is suggested for the system partitions, but
> someone on the FreeBSD IRC channel suggested using ZFS for the root fs as
> well.
>
> Are there any disadvantages to using ZFS for the whole system rather than
> going with UFS for the system files and ZFS for the user data?

	First, a disclaimer: I have been working with Solaris since 1995 and have managed lots of data under ZFS, but I have only been working with FreeBSD for about the past 6 months.

	UFS is clearly very stable and solid, but to get redundancy you need to use a separate "volume manager".

	ZFS is a completely different way of thinking about managing storage (not just a filesystem). I prefer ZFS for a number of reasons:

1) End-to-end data integrity through checksums. With the advent of 1 TB and larger drives, the uncorrectable error rate (typically 10^-14 or 10^-15) means that over the life of any drive you *are* now likely to run into uncorrectable errors. Traditional volume managers, which rely on the drive reporting bad reads and writes, cannot detect these errors, so bad data will be returned to the application.

2) Simplicity of management. Since the volume management and filesystem layers have been combined, you don't have to manage each separately.

3) Flexibility of storage.
Once you build a zpool, the filesystems that reside on it share the storage of the entire zpool. This means you don't have to decide how much space to commit to a given filesystem at creation time. It also means that all the filesystems residing in that one zpool share the performance of all the drives in the zpool.

4) Specific to booting off of ZFS: if you move drives around (as I tend to do in at least one of my lab systems), the bootloader can still find the root filesystem, because under ZFS it refers to it by ZFS device name, not physical drive device name. Yes, you can tell the bootloader where to find root if you move it, but ZFS does that automatically.

5) Zero-performance-penalty snapshots. The only cost of a snapshot is the space necessary to hold the data. I have managed systems with over 100,000 snapshots.

	I am running two production systems, one lab system, and a bunch of VirtualBox VMs, all with ZFS. The only issue I have seen is one I have also seen under Solaris with ZFS: certain kinds of hardware-layer faults will cause the ZFS management tools (the zpool and zfs commands) to hang waiting on a blocking I/O that will never return. The data continues to be available; you just can't manage the ZFS infrastructure until the device issues are cleared. For example, if you remove a USB drive that hosts a mounted ZFS filesystem, any attempt to manage that device will hang (zpool export -f hangs until a reboot).

	Previously I had been running (at home) a fileserver under OpenSolaris using ZFS, and it saved my data when I had multiple drive failures. At a certain client we had a 45 TB configuration built on top of 120 750 GB drives. We had multiple redundancy and could survive a complete failure of 2 of the 5 disk enclosures (yes, we tested this in pre-production).
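	To make the points above concrete, here is a minimal sketch of the commands involved. The pool name "tank" and the device names ada1/ada2 are hypothetical, and all of this needs root on a FreeBSD 9 (or later) box with spare disks; adjust to your hardware:

```shell
# One command creates both the redundancy layer and the pool --
# no separate volume manager (point 2).
zpool create tank mirror ada1 ada2

# Filesystems in the pool share all of its space and all of its
# spindles; no up-front size decision is needed (point 3).
zfs create tank/home
zfs create tank/backups

# A scrub reads every block and verifies its checksum, repairing
# from the mirror if a copy is bad (point 1).
zpool scrub tank
zpool status tank

# Snapshots are effectively free to take; they consume space only
# as the live data diverges from them (point 5).
zfs snapshot tank/backups@2012-12-21
zfs list -t snapshot
```

	None of this is destructive to data elsewhere, but zpool create will of course overwrite whatever is on the named disks, so double-check the device names first.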
	There are a number of good writeups on how to set up a FreeBSD system to boot off of ZFS. I like this one the best: http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE , but I do the zpool/zfs configuration slightly differently (based on some hard-learned lessons on Solaris). I am writing up my configuration (and why I do it this way), but it is not ready yet.

	Make sure you look at all the information here: http://wiki.freebsd.org/ZFS , keeping in mind that much of it was written before FreeBSD 9. I would NOT use ZFS, especially for booting, prior to release 9 of FreeBSD. Part of the reason for this is the bugs that were fixed in zpool version 28 (included in release 9).

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company