Date: Mon, 11 Jan 2016 11:45:16 +0000
From: Steven Hartland <killing@multiplay.co.uk>
To: freebsd-fs@freebsd.org
Subject: Re: Question on gmirror and zfs fs behavior in unusual setup
Message-ID: <569395CC.6060104@multiplay.co.uk>
In-Reply-To: <CAJ=a7VPaTSpYPoPcNCj1hSSQ0C2_F_pjKijA4mtLv9nj9Lb6Gw@mail.gmail.com>
References: <CAJ=a7VPrBBqoO44zpcO4Tjz8Ep1kkTbqDxR45c2DEpH1pSvGBw@mail.gmail.com> <CAJ=a7VPaTSpYPoPcNCj1hSSQ0C2_F_pjKijA4mtLv9nj9Lb6Gw@mail.gmail.com>
On 11/01/2016 11:18, Octavian Hornoiu wrote:
> I currently have several storage servers. For historical reasons they each
> have 6x 1 TB Western Digital Black SATA drives. The configuration is as
> follows:
>
> GPT disk config with boot sector
> /dev/ada0p1 freebsd-boot 64k
> /dev/ada0p2 freebsd-swap 1G
> /dev/ada0p3 freebsd-ufs 30G
> /dev/ada0p4 freebsd-zfs rest of drive
>
> The drive names are ada0 through ada5.
>
> All six drives have the same partition scheme:
> - They are all bootable.
> - Each swap partition has a label, swap0 through swap5, and all of them are
>   mounted on boot.
> - The UFS partitions are mirrored as mirror/rootfs using gmirror in a 6-way
>   mirror. (The goal of the boot and mirror redundancy is that any drive can
>   die and I can still boot off any other drive as if nothing happened. This
>   partition contains the entire OS.)
> - The ZFS partitions are in a RAIDZ-2 configuration and are redundant
>   automatically. They contain the network-accessible storage data.
>
> My dilemma is this: I am upgrading to 5 TB Western Digital Black drives. I
> have replaced drive ada5 as a test. I used the -a 4k option while
> partitioning to make sure sector alignment is correct. There are two major
> changes:
>
> - ada5p3 is now 100 G
> - ada5p4 is now much larger due to the size of the drive
>
> My understanding is that ZFS will automatically grow the total pool size
> once all drives are upgraded to the new 5 TB drives. Please correct me if
> I'm wrong! The resilver went without a hitch.

Correct, you just need to ensure that autoexpand is enabled on the pool, e.g.:

zpool set autoexpand=on tank

> My concern is with gmirror. Will gmirror grow to fit the new 100 G size
> automatically once the last drive is replaced? I got no errors using insert
> to add the 100 G partition into the mix with the other five 30 G
> partitions. It synchronized fine, the volume shows as complete, and all
> providers are healthy.

I'm not 100% sure about gmirror, but the following seems to describe what
you want:
https://lists.freebsd.org/pipermail/freebsd-questions/2007-August/156466.html

> Is anyone with knowledge of gmirror and ZFS resilvering able to confirm
> that they'll grow automatically once all 6 drives are replaced, or do I
> have to sync them at the existing size and do some growfs trick later?
>
> Thanks!
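For reference, a rough sketch of how one of the new 5 TB drives could be laid
out to match the scheme above. This is only illustrative: it assumes the new
disk probes as ada5, that the swap labels are GPT labels, and it reuses the
poster's sizes (64k boot, 1 G swap, 100 G UFS, rest ZFS).

gpart create -s gpt ada5
gpart add -t freebsd-boot -s 64k  -a 4k ada5
gpart add -t freebsd-swap -s 1G   -a 4k -l swap5 ada5
gpart add -t freebsd-ufs  -s 100G -a 4k ada5
gpart add -t freebsd-zfs          -a 4k ada5      # rest of the drive
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada5
gmirror insert rootfs ada5p3                      # rejoin the 6-way UFS mirror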
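On the ZFS side, a minimal sketch of picking up the extra space once all six
data partitions sit on the bigger drives, assuming the pool is named tank as
in the example above:

zpool set autoexpand=on tank
# expand any vdevs that were replaced before autoexpand was turned on
zpool online -e tank ada0p4 ada1p4 ada2p4 ada3p4 ada4p4 ada5p4
# the EXPANDSZ column should drop once the space has been absorbed
zpool list tank

If autoexpand was already on before the last replacement finished resilvering,
the online -e pass should not be needed.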
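For the gmirror side, recent FreeBSD releases ship a resize verb, so a
plausible sequence once all six components are 100 G would be the following
(a sketch only; check gmirror(8) and growfs(8) on your release, and assume the
UFS filesystem sits directly on mirror/rootfs with no nested partition table):

gmirror resize rootfs        # grow the mirror to match its (now larger) smallest component
growfs /dev/mirror/rootfs    # then grow the UFS filesystem on top of it

On 10.x and later growfs can be run against the mounted filesystem; on older
releases the growfs step has to be done from single-user mode.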