Date:      Wed, 25 Jan 2017 20:20:16 -0800
From:      Octavian Hornoiu <octavianh@gmail.com>
To:        Matt Churchyard <matt.churchyard@userve.net>
Cc:        freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Re: Question on gmirror and zfs fs behavior in unusual setup
Message-ID:  <CAJ=a7VNqWSAg16Y3sfB4rf4Un7o8LCXr+N0CTDq49gpgYJ0h-w@mail.gmail.com>
In-Reply-To: <9522d5cccd704b8fbe6cfe00d3bbd51a@SERVER.ad.usd-group.com>
References:  <CAJ=a7VPrBBqoO44zpcO4Tjz8Ep1kkTbqDxR45c2DEpH1pSvGBw@mail.gmail.com> <CAJ=a7VPaTSpYPoPcNCj1hSSQ0C2_F_pjKijA4mtLv9nj9Lb6Gw@mail.gmail.com> <9522d5cccd704b8fbe6cfe00d3bbd51a@SERVER.ad.usd-group.com>

On Mon, Jan 11, 2016 at 4:07 AM, Matt Churchyard <matt.churchyard@userve.net> wrote:

> >I currently have several storage servers. For historical reasons they
> >have 6x 1TB Western Digital Black SATA drives in each server. Configuration
> >is as follows:
>
> >GPT disk config with boot sector
> >/dev/ada0p1 freebsd-boot 64k
> >/dev/ada0p2 freebsd-swap 1G
> >/dev/ada0p3 freebsd-ufs 30G
> >/dev/ada0p4 freebsd-zfs rest of drive
>
> >The drive names are ada0 through ada5.
>
> >The six drives all have the same partition scheme.
> >- They are all bootable
> >- Each swap has a label from swap0 through swap5 which all mount on boot
> >- The UFS partitions are all in mirror/rootfs, mirrored using gmirror in a
> >6-way mirror. (The goal of the boot and mirror redundancy is that any drive
> >can die and I can still boot off any other drive like nothing happened.)
> >This partition contains the entire OS.
> >- The zfs partitions are in RAIDZ-2 configuration and are redundant
> automatically. They contain the network accessible storage data.
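>
> >(For reference, the mirror and pool were originally created along these
> >lines -- I'm paraphrasing from memory, so the exact commands may differ:)
>
> ># gmirror label -v rootfs ada0p3 ada1p3 ada2p3 ada3p3 ada4p3 ada5p3
> ># zpool create data raidz2 gpt/data0 gpt/data1 gpt/data2 gpt/data3 \
> >    gpt/data4 gpt/data5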
>
> >My dilemma is this. I am upgrading to 5 TB Western Digital Black drives.
> I have replaced drive ada5 as a test. I used the -a 4k command while
> >partitioning to make sure sector alignment is correct. There are two major
> >changes:
>
> >- ada5p3 is now 100 G
> >- ada5p4 is now much larger due to the size of the drive
>
> >My understanding is that zfs will automatically change the total volume
> size once all drives are upgraded to the new 5 TB drives. Please correct
> >me if I'm wrong! The resilver went without a hitch.
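>
> >For reference, the commands I used to repartition the new drive were
> >roughly as follows (from memory, so the exact sizes may differ slightly):
>
> ># gpart create -s gpt ada5
> ># gpart add -a 4k -t freebsd-boot -s 64k ada5
> ># gpart add -a 4k -t freebsd-swap -s 1G -l swap5 ada5
> ># gpart add -a 4k -t freebsd-ufs -s 100G ada5
> ># gpart add -a 4k -t freebsd-zfs -l data5 ada5
> ># gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada5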
>
> You may have to run "zpool online -e pool" once all the disks have been
> replaced, but yes it should be fairly easy to get ZFS to pick up the new
> space.
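>
> Roughly speaking (untested here, and substitute your own pool/label names):
>
> # zpool set autoexpand=on data
> # zpool online -e data gpt/data0 gpt/data1 gpt/data2 \
>     gpt/data3 gpt/data4 gpt/data5
>
> With autoexpand=on the pool should grow on its own once the last disk is
> replaced; "zpool online -e" just forces the expansion if it doesn't.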
>
> The only other issue you may see is that if you built the original pool
> with 512b sectors (ashift 9) you may find that "zpool status" starts complaining
> that you are configured for 512b sectors when your disks are 4k (I haven't
> checked but considering the size I expect those 5TB disks are 4k). If that
> happens you either have to live with the warning or rebuild the pool.
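>
> You can check what the existing pool was created with before deciding,
> something like:
>
> # zdb -C data | grep ashift
>
> ashift 9 means it was built for 512b sectors and you'll see the warning on
> 4k disks; ashift 12 means you're already fine. (Again, swap in your own
> pool name.)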
>
> >My concern is with gmirror. Will gmirror grow to fit the new 100 G size
> >automatically once the last drive is replaced? I got no errors using
> >"gmirror insert" to bring the 100 G partition into the mix with the other
> >five 30 G partitions. It synchronized fine. The volume shows as complete
> >and all providers are healthy.
>
> A quick test suggests you'll need to run "gmirror resize provider" once
> all the disks are replaced to get gmirror to update the size stored in the
> metadata -
>
> # gmirror list
> Geom name: test
> State: COMPLETE
> Components: 2
> ...
> Providers:
> 1. Name: mirror/test
>    Mediasize: 104857088 (100M)
>    Sectorsize: 512
>    Mode: r0w0e0
> Consumers:
> 1. Name: md0
>    Mediasize: 209715200 (200M)
> ...
>
> # gmirror resize test
> # gmirror list
> ...
> Providers:
> 1. Name: mirror/test
>    Mediasize: 209714688 (200M)
>    Sectorsize: 512
>    Mode: r0w0e0
> ...
>
> You will then need to expand the filesystem to fill the space using
> growfs. I've never done this myself, but it should be a fairly
> straightforward process from what I can see, although it seems resizing
> while mounted only works on 10.0+.
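>
> So once the remaining disks are swapped, something along these lines should
> get you the full 100G (untested on my side, using your mirror name):
>
> # gmirror resize rootfs
> # growfs /dev/mirror/rootfs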
>
> >Anyone with knowledge of gmirror and zfs replication able to confirm that
> >they'll grow automatically once all 6 drives are replaced, or do I have to
> >sync them at the existing size and do some growfs trick later?
>
> >Thanks!
>

Thanks, Matt!  This advice was really great; it all worked as expected!

Unfortunately, as you suspected, the pool is whining about sector size.

# zpool status
  pool: data
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: resilvered 861G in 2h59m with 0 errors on Sat Jan 21 01:33:02 2017
config:

        NAME           STATE     READ WRITE CKSUM
        data           ONLINE       0     0     0
          raidz2-0     ONLINE       0     0     0
            gpt/data0  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gpt/data1  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gpt/data2  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gpt/data3  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gpt/data4  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gpt/data5  ONLINE       0     0     0  block size: 512B configured, 4096B native


Is my best bet to do the following (rough sketch of the commands after the list):

1) create a new pool on another server
2) transfer data to other pool
3) re-create existing pool
4) transfer data back
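
If so, I'm assuming the actual copy would look something like this (the host
name "otherhost" and temporary pool name "otherpool" are made up):

# zfs snapshot -r data@migrate
# zfs send -R data@migrate | ssh otherhost zfs receive -u otherpool/data

then re-create the local pool with 4k alignment forced and pull it back:

# sysctl vfs.zfs.min_auto_ashift=12
# zpool destroy data
# zpool create data raidz2 gpt/data0 gpt/data1 gpt/data2 gpt/data3 \
    gpt/data4 gpt/data5
# ssh otherhost zfs send -R otherpool/data@migrate | zfs receive -u -F data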

Or are there options for changing the block size in place coming at some
point in the feature set that I can wait for?

Thanks!

Octavian


