Date: Mon, 11 Jan 2016 12:07:34 +0000
From: Matt Churchyard <matt.churchyard@userve.net>
To: Octavian Hornoiu <octavianh@gmail.com>
Cc: freebsd-fs <freebsd-fs@freebsd.org>
Subject: RE: Question on gmirror and zfs fs behavior in unusual setup
Message-ID: <9522d5cccd704b8fbe6cfe00d3bbd51a@SERVER.ad.usd-group.com>
In-Reply-To: <CAJ=a7VPaTSpYPoPcNCj1hSSQ0C2_F_pjKijA4mtLv9nj9Lb6Gw@mail.gmail.com>
References: <CAJ=a7VPrBBqoO44zpcO4Tjz8Ep1kkTbqDxR45c2DEpH1pSvGBw@mail.gmail.com>
 <CAJ=a7VPaTSpYPoPcNCj1hSSQ0C2_F_pjKijA4mtLv9nj9Lb6Gw@mail.gmail.com>
>I currently have several storage servers. For historical reasons they have
>6x 1TB Western Digital Black SATA drives in each server. Configuration is
>as follows:
>
>GPT disk config with boot sector
>/dev/ada0p1 freebsd-boot 64k
>/dev/ada0p2 freebsd-swap 1G
>/dev/ada0p3 freebsd-ufs 30G
>/dev/ada0p4 freebsd-zfs rest of drive
>
>The drive names are ada0 through ada5.
>
>The six drives all have the same partition scheme.
>- They are all bootable
>- Each swap has a label from swap0 through swap5, all of which mount on boot
>- The UFS partitions are all in mirror/rootfs, mirrored using gmirror in a
>  6-way mirror. (The goal of the boot and mirror redundancy is that any
>  drive can die and I can still boot off any other drive like nothing
>  happened.) This partition contains the entire OS.
>- The zfs partitions are in a RAIDZ-2 configuration and are redundant
>  automatically. They contain the network-accessible storage data.
>
>My dilemma is this. I am upgrading to 5 TB Western Digital Black drives. I
>have replaced drive ada5 as a test. I used the -a 4k option while
>partitioning to make sure sector alignment is correct. There are two major
>changes:
>
>- ada5p3 is now 100 G
>- ada5p4 is now much larger due to the size of the drive
>
>My understanding is that zfs will automatically change the total volume
>size once all drives are upgraded to the new 5 TB drives. Please correct
>me if I'm wrong! The resilver went without a hitch.

You may have to run "zpool online -e pool" once all the disks have been
replaced, but yes, it should be fairly easy to get ZFS to pick up the new
space (rough command sketch at the end of this message).

The only other issue you may see is that if you built the original pool
with 512b sectors (ashift 9), you may find "zpool status" starts complaining
that you are configured for 512b sectors when your disks are 4k (I haven't
checked, but considering the size I expect those 5TB disks are 4k). If that
happens you either have to live with the warning or rebuild the pool.

>My concern is with gmirror. Will gmirror grow to fit the new 100 G size
>automatically once the last drive is replaced? I got no errors using
>insert with the 100 G partition into the mix with the other five 30 G
>partitions. It synchronized fine. The volume shows as complete and all
>providers are healthy.

A quick test suggests you'll need to run "gmirror resize provider" once all
the disks are replaced to get gmirror to update the size stored in the
metadata -

# gmirror list
Geom name: test
State: COMPLETE
Components: 2
...
Providers:
1. Name: mirror/test
   Mediasize: 104857088 (100M)
   Sectorsize: 512
   Mode: r0w0e0
Consumers:
1. Name: md0
   Mediasize: 209715200 (200M)
...

# gmirror resize test
# gmirror list
...
Providers:
1. Name: mirror/test
   Mediasize: 209714688 (200M)
   Sectorsize: 512
   Mode: r0w0e0
...

You will then need to expand the filesystem to fill the space using growfs
(see the sketch below). I've never done this, but it should be a fairly
straightforward process from what I can see, although it seems resizing
while mounted only works on 10.0+.

>Anyone with knowledge of gmirror and zfs replication able to confirm that
>they'll grow automatically once all 6 drives are replaced, or do I have to
>sync them at existing size and do some growfs trick later?
>
>Thanks!
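To answer the last question directly: neither side grows entirely on its
own, but it only takes a couple of commands once the last disk is in. A
rough sketch of the ZFS part - the pool name "tank" is just a placeholder,
and I'm assuming the pool references the raw adaXp4 partitions, so adjust
the names to whatever "zpool status" actually shows:

# zpool set autoexpand=on tank
(optional - with this set the pool should pick up the extra space on its
own once the last new disk is online)

# zpool online -e tank ada0p4 ada1p4 ada2p4 ada3p4 ada4p4 ada5p4
# zpool list tank
(SIZE should now reflect the 5TB partitions)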
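If you want to check what sector size the existing pool was built with
before you get that far, zdb should show the ashift value (again, "tank"
is a placeholder; 9 means 512b sectors, 12 means 4k):

# zdb -C tank | grep ashift
            ashift: 9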
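And the UFS side, once the remaining p3 partitions have all been replaced
with the 100G ones - again only a sketch, using the mirror/rootfs name from
your description, and I haven't tried growing a mounted root mirror myself:

# gmirror resize rootfs
# gmirror list rootfs
(the mirror/rootfs provider should now report the full 100G)

# growfs /dev/mirror/rootfs
(on 10.0+ this works with the filesystem mounted; it asks for confirmation
before growing - "growfs -y" skips the prompt)

# df -h /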