From: Don Lewis <truckman@FreeBSD.org>
To: freebsd-fs@FreeBSD.org
Date: Tue, 7 Aug 2012 21:24:14 -0700 (PDT)
Message-Id: <201208080424.q784OEfY051025@gw.catspoiler.org>
Subject: ZFS questions

I've got a couple of questions about a raidz array that I'm putting together.  Capacity is more important to me than speed, but I don't want to do anything too stupid.

The fine manual says that using whole disks is preferable to using slices, because drive write caching is enabled when the entire drive is dedicated to ZFS, and enabling it would break things if the drive also contained a UFS slice.  Does this really matter much if NCQ is available?  Each drive will be dedicated to ZFS, but I'm planning on using GPT to slightly undersize the ZFS partition on each drive to avoid any potential problems with replacement drives that are slightly smaller than the originals (roughly along the lines of the gpart sketch at the end of this mail).

I'm slowly accumulating the drives over time, both for budgetary reasons and to reduce the chances of multiple near-simultaneous failures of drives from the same manufacturing batch.  I'd like to get the array up and running before I have all the drives, but unfortunately ZFS doesn't allow new drives to be added to an existing raidz vdev to increase its capacity.

I do have some smaller drives, and I was thinking about pairing those up with gconcat or gstripe and configuring the ZFS pool with the concatenated/striped pairs (see the second sketch below).  I know this isn't recommended, but it seems to me that zpool create would accept it.  What concerns me is what happens on reboot when ZFS goes searching for all of the components of its pool.  Will it stumble across its metadata on the first drive of a concatenated pair and try to add that individual drive to the pool instead of the pair?
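
For concreteness, this is roughly the gpart layout I had in mind for each full-size drive; the device name, label, and partition size below are just made-up examples, not the real drives:

    # Put a GPT on the drive and add a freebsd-zfs partition a bit
    # smaller than the raw capacity, leaving slack at the end so a
    # slightly smaller replacement drive will still fit.  ada0, the
    # "disk0" label, and the 2794G size are placeholders.
    gpart create -s gpt ada0
    gpart add -t freebsd-zfs -l disk0 -s 2794G ada0
    # The pool would then be built from the gpt/disk* labels rather
    # than the raw devices.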
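
And this is roughly how I'd glue two of the smaller drives together with gconcat before handing them to zpool create; again the device and pool names are placeholders, and I'm assuming geom_concat gets loaded at boot:

    # Write gconcat metadata to the last sector of each member so the
    # concatenated provider is reassembled automatically at boot
    # (needs geom_concat_load="YES" in /boot/loader.conf).
    gconcat label -v pair0 ada4 ada5
    gconcat label -v pair1 ada6 ada7

    # Build the raidz vdev from the full-size drives plus the
    # concatenated providers.
    zpool create tank raidz gpt/disk0 gpt/disk1 concat/pair0 concat/pair1

It's that last step, and what happens to it after a reboot, that I'm unsure about.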