From: Emre Gundogan
Date: Fri, 19 Jun 2015 15:30:51 -0400
To: freebsd-questions@freebsd.org
Subject: Re: Expanding zfs+geli pool
Message-ID: <55846DEB.6040905@gundogan.us>
In-Reply-To: <55843CF1.1070709@gundogan.us>
References: <55843CF1.1070709@gundogan.us>

Thanks a lot, Matthew. I was able to replace two of the disks (disk2 and
disk4) with that method successfully, so I had the pool in the following
state:

  pool:
    mirror1:  disk1 (2TB)  disk2 (4TB)
    mirror2:  disk3 (2TB)  disk4 (4TB)

Then I scrubbed the pool (no errors) and replaced 'disk1'. That's when the
whole pool became 'UNAVAIL' once I brought the machine back up. In fact,
replacing either of the remaining 2TB disks (disk1 or disk3) at that point
left the pool 'UNAVAIL'. I wish I could tell why. Maybe it's got something
to do with the GELI layer, or with the fact that the new disks are 4K
Advanced Format while the old ones are 512-byte logical/physical (although
mixed vdevs should be OK from what I understand); in any case the pool was
correctly marked as ashift=12 based on the output from zdb.

Given that I can only afford a limited downtime on this pool, I went to
Plan B: 'zfs send' a snapshot of the pool to another machine, re-create the
pool with the 4TB disks, and 'zfs receive' it back from the backup machine.
So I am back to the state I mentioned above (half 2TB, half 4TB), the pool
is 'ONLINE', and the 'zfs send' has been running for a few hours now...
(Rough sketches of the commands involved are at the end of this message.)

Thanks again,
Emre.

> So long as you ensure that only one drive out of each vdev is replaced
> at a time, and that resilvering has completed before you replace the
> other one, then, subject to those constraints, the ordering doesn't
> matter. The order you describe would work fine.
>
> Ideally, yes, adding the new drives to the mirror before removing the
> old ones would be better, but as your hardware doesn't support that,
> you're going to have to accept a period of lower resilience while all
> of the resilvering goes on.
>
> Yes, making each mirrored vdev contain one new and one older drive
> would be sensible.
>
> Check the setting of the 'autoexpand' property on the zpool before you
> begin.
> In your shoes I'd set autoexpand=off and then issue an explicit
>
>   zpool online -e pool disk1 disk2 disk3 disk4
>
> after all the resilvering is done, so that you're in control of when
> the expanded space becomes available.
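
For reference, the per-disk swap on a GELI-backed mirror goes along these
lines. This is only a sketch: the pool name 'tank', the device names
ada1/ada5, and the keyfile path are placeholders, and the geli init options
have to match however the existing providers were created.

  # take the old 2TB member of one mirror offline, then physically swap it
  zpool offline tank ada1.eli

  # set up GELI on the new 4TB disk with 4K sectors (matches ashift=12);
  # adjust the -K/-k keyfile and passphrase handling to the other providers
  geli init -s 4096 -K /root/tank.key /dev/ada5
  geli attach -k /root/tank.key /dev/ada5      # creates /dev/ada5.eli

  # resilver onto the new provider and wait for it to complete
  zpool replace tank ada1.eli ada5.eli
  zpool status tank

One quick way to confirm the pool's ashift afterwards is
'zdb -C tank | grep ashift'.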
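
The Plan B migration mentioned above is essentially the standard
send/receive replication recipe. Again only a sketch; 'tank', 'backuphost',
'backup/tank-copy' and the snapshot name are placeholders.

  # snapshot everything and replicate it to the backup machine
  zfs snapshot -r tank@migrate
  zfs send -R tank@migrate | ssh backuphost zfs receive -F backup/tank-copy

  # after destroying and re-creating 'tank' on the four 4TB GELI providers:
  ssh backuphost zfs send -R backup/tank-copy@migrate | zfs receive -F tank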