From owner-freebsd-fs@FreeBSD.ORG Thu Aug  9 02:47:48 2012
Message-Id: <201208090247.q792ldPo053970@gw.catspoiler.org>
Date: Wed, 8 Aug 2012 19:47:39 -0700 (PDT)
From: Don Lewis <truckman@FreeBSD.org>
To: break19@gmail.com
Cc: freebsd-fs@FreeBSD.org
In-Reply-To: <20120808080829.534e6e16.break19@gmail.com>
Subject: Re: ZFS questions

On 8 Aug, Chuck Burns wrote:
> On Wed, 08 Aug 2012 10:05:10 +0200
> "Ronald Klop" wrote:
>
>> > I'm slowly accumulating the drives over time, both for budgetary
>> > reasons and to reduce the chances of multiple near-simultaneous
>> > failures of drives from the same manufacturing batch. I'd like to
>> > get the array up and running before I have all the drives, but
>> > unfortunately ZFS doesn't allow new drives to be added to an
>> > existing raidz vdev to increase its capacity. I do have some
>> > smaller drives, and I was thinking about pairing those up with
>> > gconcat or gstripe and configuring the ZFS pool with the
>> > concatenated/striped pairs. I know this isn't recommended, but it
>> > seems to me that zpool create would accept this. What concerns me
>> > is what happens on reboot when ZFS goes searching for all of the
>> > components of its pool. Will it stumble across its metadata on the
>> > first drive of a concatenated pair and try to add that individual
>> > drive to the pool instead of the pair?
>>
>> I don't know. Somebody else might answer this.

> Wouldn't it work to just use one new, large drive along with the rest
> of the smaller drives, simply replacing them with the large drives as
> you get them? The -size- of the vdevs can change, and from my limited
> testing, it seems you can upgrade space not by adding more drives,
> but by replacing existing drives in the vdev with larger ones, then
> resilvering.

Yes, I'll actually be going through this stage first. The problem is
that the extra space won't be available until all of the drives have
been upgraded. The capacity is limited to (N-P) times the size of the
smallest drive, where N is the number of drives in the vdev and P is
the number of parity drives.

My objective is to build an 11-drive raidz3 array using 4 TB drives.
I'll start off with a mixture of 2 TB and 4 TB drives, and the 2 TB
drives will limit my initial capacity to (11-3) x 2 TB = ~16 TB. I've
got some more 4 TB drives in a mixed JBOD array on a Linux machine.
As I move the data off each of those drives, I'll move them over to
the ZFS array, replacing some of the 2 TB drives. Whenever I free up a
2 TB drive, I can concatenate it with one of the 2 TB drives already
in the array. Once all of the 2 TB drives are paired, my array
capacity will double. Then I can start replacing pairs of 2 TB drives
with 4 TB drives.
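Roughly, the gconcat-plus-zpool step I have in mind would look
something like the following sketch. The pool name and the device
names (da0 through da11) are just placeholders, not my actual layout:

  # Glue two spare 2 TB drives together as one concatenated provider.
  # gconcat label writes persistent metadata so the device comes back
  # after a reboot as /dev/concat/pair0.
  gconcat label -v pair0 /dev/da8 /dev/da9

  # Create the 11-wide raidz3 pool, mixing whole 4 TB drives with the
  # concatenated pair; concat/pair0 shows up as a single ~4 TB provider.
  zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 \
      da10 da11 concat/pair0

The open question is still whether ZFS taste-testing the raw
components at import time would trip over the metadata on the first
half of a pair, which is exactly what I was asking about above.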
This is going to take a while ... When I first bring up the array,
I'll have just enough disks to configure it and bring it online.
After it is up, I'll start migrating data to it from a Linux JBOD
array that contains a wide variety of drives. It's got some of the
same large drives that I'm using for the ZFS array. I'll migrate the
data off of those drives first, and as they get freed up, I'll move
them over to the ZFS array, replacing the smaller drives in the ZFS
array one at a time and resilvering. Unfortunately, the available
space won't increase when I do this.
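Each of those swaps should just be a replace-and-resilver cycle,
something like this (again, the pool name and device names are only
placeholders for illustration):

  # Swap a 2 TB member (da3 here) for a freed-up 4 TB drive (da12)
  # and let ZFS resilver onto the new disk.
  zpool replace tank da3 da12

  # Watch resilver progress.
  zpool status tank

  # The extra space only becomes usable once every member of the vdev
  # is the larger size, and even then only with autoexpand enabled or
  # after expanding the devices explicitly.
  zpool set autoexpand=on tank
  zpool online -e tank da12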