From: "Ronald Klop" <ronald-freebsd8@klop.yi.org>
To: freebsd-fs@freebsd.org
Subject: Re: ZFS questions
Date: Wed, 08 Aug 2012 10:05:10 +0200
In-Reply-To: <201208080424.q784OEfY051025@gw.catspoiler.org>

On Wed, 08 Aug 2012 06:24:14 +0200, Don Lewis wrote:

> I've got a couple of questions about a raidz array that I'm putting
> together. Capacity is more important to me than speed, but I don't
> want to do anything too stupid.
>
> The fine manual says that using whole disks is preferable to using
> slices, because drive write-caching is enabled only if the entire
> drive is dedicated to ZFS; caching would break things if the drive
> also contained a UFS slice. Does this really matter much if NCQ is
> available? Each drive will be dedicated to ZFS, but I'm planning on
> using GPT to slightly undersize the ZFS slice on each drive, to
> avoid any potential issues when installing replacement drives that
> are slightly smaller than the originals.

Solaris does/did this. FreeBSD does not disable the write cache if
you don't use the whole disk.
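As an aside, the undersizing described above is usually done with
gpart(8). A minimal sketch, assuming a hypothetical nominal 3 TB disk
ada1 and a round partition size chosen a little below its raw
capacity (device name, label, and size are made up for illustration):

  # Create a GPT scheme and a freebsd-zfs partition slightly smaller
  # than the disk, so a marginally smaller replacement still fits.
  gpart create -s gpt ada1
  gpart add -t freebsd-zfs -a 1M -s 2794G -l zdisk1 ada1
  # The pool is then built from the label, i.e. /dev/gpt/zdisk1.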
> I'm slowly accumulating the drives over time, both for budgetary
> reasons and to try to reduce the chances of multiple
> near-simultaneous failures of drives from the same manufacturing
> batch. I'd like to get the array up and running before I have all
> the drives, but unfortunately ZFS doesn't allow new drives to be
> added to an existing raidz vdev to increase its capacity. I do have
> some smaller drives, and I was thinking about pairing those up with
> gconcat or gstripe and configuring the ZFS pool with the
> concatenated/striped pairs. I know this isn't recommended, but it
> seems to me like zpool create would accept this. What concerns me
> is what happens on reboot when ZFS goes searching for all of the
> components of its pool. Will it stumble across its metadata on the
> first of the two concatenated pairs and try to add that individual
> drive to the pool instead of the pair?

I don't know. Somebody else might answer this.

Ronald.
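P.S. For reference, a rough sketch of the layout the question
describes, with purely hypothetical device names. gconcat(8) "label"
writes its metadata in the last sector of each member, so the pairs
reassemble automatically at boot, provided the module loads early
enough (geom_concat_load="YES" in /boot/loader.conf). ZFS keeps two
of its four vdev labels at the front of a device and two at the end,
so the front labels of a concat land on its first member, which is
part of why the tasting question above is a reasonable worry.

  # Pair two smaller disks into one concatenated device each.
  gconcat label -v pair0 ada2 ada3    # appears as /dev/concat/pair0
  gconcat label -v pair1 ada4 ada5    # appears as /dev/concat/pair1

  # Build the raidz vdev from full-size partitions plus the pairs.
  zpool create tank raidz gpt/zdisk0 gpt/zdisk1 concat/pair0 concat/pair1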