From: Matt Churchyard <matt.churchyard@userve.net>
To: Octavian Hornoiu
CC: freebsd-fs <freebsd-fs@freebsd.org>
Subject: RE: Question on gmirror and zfs fs behavior in unusual setup
Date: Mon, 11 Jan 2016 12:07:34 +0000

>I currently have several storage servers. For historical reasons they have 6x 1TB Western Digital Black SATA drives in each server. Configuration is as follows:
>
>GPT disk config with boot sector
>/dev/ada0p1 freebsd-boot 64k
>/dev/ada0p2 freebsd-swap 1G
>/dev/ada0p3 freebsd-ufs 30G
>/dev/ada0p4 freebsd-zfs rest of drive
>
>The drive names are ada0 through ada5.
>The six drives all have the same partition scheme.
>- They are all bootable
>- Each swap has a label from swap0 through swap5, which all mount on boot
>- The UFS partitions are all in mirror/rootfs, mirrored using gmirror in a 6-way mirror. (The goal of the boot and mirror redundancy is that any drive can die and I can still boot off any other drive like nothing happened. This partition contains the entire OS.)
>- The zfs partitions are in RAIDZ-2 configuration and are redundant automatically. They contain the network accessible storage data.
>
>My dilemma is this.
>I am upgrading to 5 TB Western Digital Black drives. I have replaced drive ada5 as a test. I used the -a 4k option while partitioning to make sure sector alignment is correct. There are two major changes:
>- ada5p3 is now 100 G
>- ada5p4 is now much larger due to the size of the drive
>
>My understanding is that zfs will automatically change the total volume size once all drives are upgraded to the new 5 TB drives. Please correct me if I'm wrong! The resilver went without a hitch.

You may have to run "zpool online -e pool" once all the disks have been replaced, but yes, it should be fairly easy to get ZFS to pick up the new space.

The only other issue you may see is that if you built the original pool with 512b sectors (ashift 9), you may find "zpool status" starts complaining that you are configured for 512b sectors when your disks are 4k (I haven't checked, but considering the size I expect those 5TB disks are 4k). If that happens you either have to live with the warning or rebuild the pool.

>My concern is with gmirror. Will gmirror grow to fit the new 100 G size automatically once the last drive is replaced? I got no errors using insert with the 100 G partition into the mix with the other five 30 G partitions. It synchronized fine. The volume shows as complete and all providers are healthy.

A quick test suggests you'll need to run "gmirror resize provider" once all the disks are replaced to get gmirror to update the size stored in the metadata:

# gmirror list
Geom name: test
State: COMPLETE
Components: 2
...
Providers:
1. Name: mirror/test
   Mediasize: 104857088 (100M)
   Sectorsize: 512
   Mode: r0w0e0
Consumers:
1. Name: md0
   Mediasize: 209715200 (200M)
...

# gmirror resize test
# gmirror list
...
Providers:
1. Name: mirror/test
   Mediasize: 209714688 (200M)
   Sectorsize: 512
   Mode: r0w0e0
...

You will then need to expand the filesystem to fill the space using growfs. I've never done this, but it should be a fairly straightforward process from what I can see, although it seems resizing while mounted only works on 10.0+. (There's a rough sketch of the whole sequence at the end of this mail.)

>Anyone with knowledge of gmirror and zfs replication able to confirm that they'll grow automatically once all 6 drives are replaced, or do I have to sync them at existing size and do some growfs trick later?
>
>Thanks!
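For reference, a rough sketch of the commands involved once the last of the six drives has been swapped in. I haven't run this exact sequence end to end, the pool name "tank" is just a placeholder, and it assumes the pool sits directly on the adaXp4 partitions and the gmirror geom is the "rootfs" one you mentioned, so adjust names to suit.

ZFS side - either set autoexpand before the final replacement, or tell the pool about the new space afterwards, then check the reported size (and ashift, if you want to know whether the 512b/4k warning applies):

# zpool set autoexpand=on tank
# zpool online -e tank ada0p4 ada1p4 ada2p4 ada3p4 ada4p4 ada5p4
# zpool list tank
# zdb -C tank | grep ashift

gmirror side - update the size recorded in the mirror metadata, confirm the provider now reports the ~100 G size, then grow the UFS filesystem on top of it (growing it while mounted needs 10.0 or later):

# gmirror resize rootfs
# gmirror list rootfs
# growfs /dev/mirror/rootfs

Treat that as a starting point rather than a recipe, and obviously test on something disposable first.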