Date: Wed, 3 May 2023 09:08:50 -0700
From: Freddie Cash <fjwcash@gmail.com>
To: FreeBSD Filesystems <freebsd-fs@freebsd.org>, FreeBSD Stable <freebsd-stable@freebsd.org>
Subject: Expanding storage in a ZFS pool using draid
Message-ID: <CAOjFWZ7Fcnu=MfKY57gWz1g%2BFiaTFK-6-am5Wy0afDx6X2p_xg@mail.gmail.com>
I might be missing something, or not understanding how draid works "behind the scenes".

With a ZFS pool using multiple raidz vdevs, it's possible to increase the available storage in a pool by replacing each drive in a raidz vdev. Once the last drive is replaced, either the extra storage space appears automatically, or you run "zpool online -e <poolname> <disk>" for each disk.

For example, if you create a pool with 2 raidz vdevs using 6x 1 TB drives per vdev, you'll end up with ~10 TB of space available to the pool. Later, you can replace all 6 drives in one raidz vdev with 2 TB drives, and get an extra 5 TB of free space in the pool. Later still, you can replace the 6 drives in the other raidz vdev with 2 TB drives, and get another 5 TB of free space in the pool.

We've been doing this for years, and it works great.

When draid became available, we configured our new storage pools using that instead of multiple raidz vdevs. One of the pools uses 44x 2 TB drives, configured in a draid pool using:

mnparity: 2
draid_ndata: 4
draid_ngroups: 7
draid_nspares: 2

IIUC, this means the drives are configured in 7 groups of 6, using 4 drives for data and 2 for parity in each group, with 2 drives configured as spares.

The pool works great, but we're running out of space. So, we replaced the first 6 drives in the pool with 4 TB drives, expecting the usable space in that group to grow from 4*2=8 TB to 4*4=16 TB, i.e. an extra 8 TB of free space in the pool. However, to our great surprise, that is not the case! Total storage capacity of the pool has not changed, even after running "zpool online -e" against each of the 4 TB drives.

Do we need to replace EVERY drive in the draid vdev in order to get extra free space in the pool? Or is there some other command that needs to be run to tell ZFS to use the extra storage space available? Or ... ?

Usually, we just replace drives in groups of 6, going from 1 TB to 2 TB to 4 TB as needed.
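For reference, the per-vdev upgrade procedure described above can be sketched as follows (the pool name "tank" and the da* device names here are made up for illustration):

```shell
# Replace each 1 TB disk in the raidz vdev with a 2 TB disk, one at a
# time, letting each resilver finish before starting the next.
zpool replace tank da0 da12
zpool wait -t resilver tank      # or keep an eye on 'zpool status tank'
# ...repeat for the remaining five disks in the vdev...

# Once the last disk is replaced, the extra space appears automatically
# if autoexpand was set on the pool beforehand:
zpool set autoexpand=on tank

# ...otherwise, expand each replaced disk explicitly:
zpool online -e tank da12
```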
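The capacity numbers above work out as follows (a quick sketch, sizes in TB; the raidz level isn't stated, but ~10 TB from two 6x 1 TB vdevs implies raidz1, i.e. 5 data disks per vdev):

```shell
# Two raidz1 vdevs of 6x 1 TB -> 2 * 5 data disks * 1 TB = 10 TB usable.
pool_before=$(( 2 * 5 * 1 ))
# Upgrade one vdev to 2 TB drives -> (5 * 2) + (5 * 1) = 15 TB usable.
pool_after=$(( (5 * 2) + (5 * 1) ))
echo "raidz gain: $(( pool_after - pool_before )) TB"    # 5 TB

# One draid redundancy group: 4 data + 2 parity drives.
group_before=$(( 4 * 2 ))   # 8 TB usable with 2 TB drives
group_after=$(( 4 * 4 ))    # 16 TB usable with 4 TB drives
echo "expected draid gain: $(( group_after - group_before )) TB"   # 8 TB
```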
Having to buy 44 drives (or 88 in our other draid-using storage server) and replace them all at once is going to be a massive (and expensive) undertaking! That might be enough to make us rethink how we use draid going forward. :(

--
Freddie Cash
fjwcash@gmail.com