Date:      Fri, 25 Mar 2022 11:34:09 -0600
From:      Alan Somers <asomers@freebsd.org>
To:        Martin Simmons <martin@lispworks.com>
Cc:        John Doherty <bsdlists@jld3.net>, freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Re: mirror vdevs with different sizes
Message-ID:  <CAOtMX2hYMU+F9xJ1GueU3qnBefFF=KNyogJFxwDYipV0aR4cXg@mail.gmail.com>
In-Reply-To: <202203251705.22PH56du029811@higson.cam.lispworks.com>
References:  <95932839-F6F8-4DCB-AA7F-46040CFA1DE1@jld3.net> <CAOtMX2jdRT1mmDzcPbkN42ruKe6gC5QaHQZRMmHeup=C562wFA@mail.gmail.com> <202203251705.22PH56du029811@higson.cam.lispworks.com>

Yes, exactly.  There's nothing mysterious about large vdevs in ZFS;
it's just that a greater fraction of the OP's pool's data will be
stored on the new disks, while their performance isn't likely to be
much better than that of the old disks.
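If you want to watch the split in practice, "zpool iostat -v <pool>"
reports operations and bandwidth per vdev; since ZFS biases new
allocations toward the vdevs with the most free space, you should see
roughly twice as many writes landing on each 16 TB mirror as on each
8 TB one.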
-Alan

On Fri, Mar 25, 2022 at 11:05 AM Martin Simmons <martin@lispworks.com> wrote:
>
> Is "the new disks will have a lower ratio of IOPS/TB" another way of saying
> "more of the data will be stored on the new disks, so they will be accessed
> more frequently"?  Or is this something about larger vdevs in general?
>
> __Martin
>
>
> >>>>> On Fri, 25 Mar 2022 10:09:39 -0600, Alan Somers said:
> >
> > There's nothing wrong with doing that.  The performance won't be
> > perfectly balanced, because the new disks will have a lower ratio of
> > IOPS/TB.  But that's fine.  Go ahead.
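> >
> > To put rough, purely illustrative numbers on that: if each mirror
> > can sustain on the order of 200 random IOPS, the 8 TB mirrors give
> > you ~25 IOPS per TB stored, while the 16 TB mirrors give ~12.5
> > IOPS/TB, i.e. half the I/O capacity per terabyte.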
> > -Alan
> >
> > On Fri, Mar 25, 2022 at 9:17 AM John Doherty <bsdlists@jld3.net> wrote:
> > >
> > > Hello, I have an existing zpool with 12 mirrors of 8 TB disks. It is
> > > currently about 60% full and we expect to fill the remaining space
> > > fairly quickly.
> > >
> > > I would like to expand it, preferably using 12 mirrors of 16 TB disks.
> > > Any reason I shouldn't do this?
> > >
> > > Using plain files created with truncate(1) like these:
> > >
> > > [root@ibex] # ls -lh /vd/vd*
> > > -rw-r--r--  1 root  wheel   8.0G Mar 25 08:49 /vd/vd0
> > > -rw-r--r--  1 root  wheel   8.0G Mar 25 08:49 /vd/vd1
> > > -rw-r--r--  1 root  wheel    16G Mar 25 08:49 /vd/vd2
> > > -rw-r--r--  1 root  wheel    16G Mar 25 08:49 /vd/vd3
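> > >
> > > (i.e., created with something like "truncate -s 8G /vd/vd{0,1}"
> > > and "truncate -s 16G /vd/vd{2,3}")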
> > >
> > > I can first do this:
> > >
> > > [root@ibex] # zpool create ztest mirror /vd/vd{0,1}
> > > [root@ibex] # zpool list ztest
> > > NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> > > ztest  7.50G   384K  7.50G        -         -     0%     0%  1.00x  ONLINE  -
> > >
> > > And then do this:
> > >
> > > [root@ibex] # zpool add ztest mirror /vd/vd{2,3}
> > > [root@ibex] # zpool list ztest
> > > NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> > > ztest    23G   528K  23.0G        -         -     0%     0%  1.00x  ONLINE  -
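> > >
> > > ("zpool list -v ztest" breaks this down per vdev, showing each
> > > mirror's own SIZE, ALLOC, and FREE.)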
> > >
> > > And FWIW, everything works as expected. But I've never constructed a
> > > real zpool with vdevs of different sizes, so I don't know whether any
> > > problems are to be expected.
> > >
> > > I could just create a new zpool with new disks, but most of the
> > > existing data and most of the expected new data is in just two file
> > > systems, and for those users' simplicity it would be nicer to make
> > > the existing file systems larger than to give them access to a new,
> > > different one.
> > >
> > > Any comments, suggestions, warnings, etc. much appreciated. Thanks.
> > >
> >


