Date:      Mon, 11 Jan 2016 13:16:11 +0100
From:      Miroslav Lachman <000.fbsd@quip.cz>
To:        Matt Churchyard <matt.churchyard@userve.net>, Octavian Hornoiu <octavianh@gmail.com>
Cc:        freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Re: Question on gmirror and zfs fs behavior in unusual setup
Message-ID:  <56939D0B.6010509@quip.cz>
In-Reply-To: <9522d5cccd704b8fbe6cfe00d3bbd51a@SERVER.ad.usd-group.com>
References:  <CAJ=a7VPrBBqoO44zpcO4Tjz8Ep1kkTbqDxR45c2DEpH1pSvGBw@mail.gmail.com> <CAJ=a7VPaTSpYPoPcNCj1hSSQ0C2_F_pjKijA4mtLv9nj9Lb6Gw@mail.gmail.com> <9522d5cccd704b8fbe6cfe00d3bbd51a@SERVER.ad.usd-group.com>

Matt Churchyard via freebsd-fs wrote on 01/11/2016 13:07:
>> I currently have several storage servers. For historical reasons they have 6x 1TB Western Digital Black SATA drives in each server. Configuration is as follows:
>
>> GPT disk config with boot sector
>> /dev/ada0p1 freebsd-boot 64k
>> /dev/ada0p2 freebsd-swap 1G
>> /dev/ada0p3 freebsd-ufs 30G
>> /dev/ada0p4 freebsd-zfs rest of drive
>
>> The drive names are ada0 through ada5.
>
>> The six drives all have the same partition scheme.
>> - They are all bootable
>> - Each swap has a label from swap0 through swap5 which all mount on boot
>> - The UFS partitions are all in mirror/rootfs, mirrored using gmirror in a 6-way mirror. (The goal of the boot and mirror redundancy is that any drive can die and I can still boot off any other drive like nothing happened.) This partition contains the entire OS.
>> - The zfs partitions are in a RAID-Z2 configuration and are redundant automatically. They contain the network-accessible storage data.
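
For reference, the layout described above could have been built with roughly the following commands. This is only a sketch: the pool name "tank" is my guess, the bootcode step is simply how I would make each disk bootable, and the gpart/bootcode steps would be repeated for ada0 through ada5:

  gpart create -s gpt ada0
  gpart add -t freebsd-boot -s 64k ada0
  gpart add -t freebsd-swap -s 1G -l swap0 ada0
  gpart add -t freebsd-ufs -s 30G ada0
  gpart add -t freebsd-zfs ada0
  gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
  gmirror label rootfs ada0p3 ada1p3 ada2p3 ada3p3 ada4p3 ada5p3
  zpool create tank raidz2 ada0p4 ada1p4 ada2p4 ada3p4 ada4p4 ada5p4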
>
>> My dilemma is this. I am upgrading to 5 TB Western Digital Black drives. I have replaced drive ada5 as a test. I used the -a 4k flag while partitioning to make sure sector alignment is correct. There are two major changes:
>
>> - ada5p3 is now 100 G
>> - ada5p4 is now much larger due to the size of the drive
>
>> My understanding is that ZFS will automatically change the total volume size once all drives are upgraded to the new 5 TB drives. Please correct me if I'm wrong! The resilver went without a hitch.
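
If I read that right, the new disk was prepared along these lines; only the changed partitions and the replace step are shown, and the pool name "tank" is again a guess:

  gpart add -t freebsd-ufs -a 4k -s 100G ada5
  gpart add -t freebsd-zfs -a 4k ada5
  zpool replace tank ada5p4    # kicks off the resilver mentioned above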
>
> You may have to run "zpool online -e pool" once all the disks have been replaced, but yes, it should be fairly easy to get ZFS to pick up the new space.
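>
> A minimal sketch of that step, assuming the pool is called "tank" and the data partitions are p4 on each disk:
>
>   zpool online -e tank ada0p4 ada1p4 ada2p4 ada3p4 ada4p4 ada5p4
>   zpool list tank    # SIZE should now reflect the larger partitions
>
> Setting "zpool set autoexpand=on tank" beforehand should have the same effect without the explicit online -e.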
>
> The only other issue you may see is that if you built the original pool with 512b sectors (ashift 9), you may find "zpool status" starts complaining that you are configured for 512b sectors when your disks are 4k (I haven't checked, but considering the size I expect those 5 TB disks are 4k). If that happens you either have to live with the warning or rebuild the pool.
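>
> If it helps, the current value can be checked, and 4k behaviour forced for any rebuilt pool, with something like this (vfs.zfs.min_auto_ashift appeared around FreeBSD 10.1, so treat it as an assumption on older releases):
>
>   zdb | grep ashift                    # 9 = 512-byte sectors, 12 = 4k
>   sysctl vfs.zfs.min_auto_ashift=12    # newly created vdevs will use ashift 12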
>
>> My concern is with gmirror. Will gmirror grow to fit the new 100 G size automatically once the last drive is replaced? I got no errors using insert with the 100 G partition into the mix with the other five 30 G partitions. It synchronized fine. The volume shows as complete and all providers are healthy.
>
> A quick test suggests you'll need to run "gmirror resize provider" once all the disks are replaced to get gmirror to update the size stored in the metadata -

Good point. I didn't know about "gmirror resize". It was not in FreeBSD 
8.4 - the last time I played with replacing disks with bigger ones.
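
For the archive, I guess the whole grow step will look roughly like this once all six disks carry the 100 G partition (untested on my side, so take it as a sketch):

  gmirror resize rootfs        # update metadata to the size of the smallest component
  growfs /dev/mirror/rootfs    # then grow the UFS filesystem into the new space

growfs on a mounted filesystem needs FreeBSD 10.0 or newer, if I remember correctly.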

Thank you

Miroslav Lachman


