From: Freddie Cash <fjwcash@gmail.com>
Date: Wed, 3 May 2023 09:08:50 -0700
Subject: Expanding storage in a ZFS pool using draid
To: FreeBSD Filesystems <freebsd-fs@freebsd.org>, FreeBSD Stable <freebsd-stable@freebsd.org>

I might be missing something, or not understanding how draid works "behind the scenes".

With a ZFS pool using multiple raidz vdevs, it's possible to increase the available storage in a pool by replacing each drive in the raidz vdev. Once the last drive is replaced, either the extra storage space appears automatically, or you run "zpool online -e <poolname> <disk>" for each disk.
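
For anyone following along, the per-vdev workflow looks roughly like this (the pool and device names below are made up for illustration):

  # hypothetical pool "tank" and devices daN
  # swap one 1 TB disk for a 2 TB disk, then wait for the resilver to finish
  zpool replace tank da0 da12
  # ... repeat for the remaining five disks in that vdev ...
  # once the last disk is done, if autoexpand is off, tell ZFS to claim the extra space on each new disk
  zpool online -e tank da12
  zpool online -e tank da13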

For example, if you create a pool with 2 raidz vdevs using 6x 1 TB drives per vdev, you'll end up with ~10 TB of space available to the pool. Later, you can replace all 6 drives in one raidz vdev with 2 TB drives and get an extra ~5 TB of free space in the pool; later still, you can replace the 6 drives in the other raidz vdev with 2 TB drives and get another ~5 TB of free space in the pool.
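
Spelling out the arithmetic we rely on (this assumes single-parity raidz1; raidz2 math would differ):

  per vdev, before:    (6 drives - 1 parity) x 1 TB = ~5 TB usable
  whole pool, before:  2 vdevs x ~5 TB = ~10 TB
  after one vdev moves to 2 TB drives: 5 x 2 TB = ~10 TB for that vdev, so roughly +5 TB for the pool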

We've been doing this for years, and it works great.

When draid became available, we configured our new storage pools using that instead of multiple raidz vdevs. One of the pools uses 44x 2 TB drives, configured in a draid pool using:
= mnparity: 2
draid_ndata: 4
draid_ngroups: 7
draid_nspares: 2

IIUC, this means the drives are configured in 7 groups of 6, using 4 drives for data and 2 for parity in each group, with 2 drives configured as spares.
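
If I'm reading those numbers correctly, it's the layout you'd get from a draid vdev spec along the lines of the following (pool and device names here are placeholders, not our actual ones):

  # draid2 = 2 parity, 4d = 4 data disks per group, 44c = 44 children, 2s = 2 distributed spares
  zpool create bigpool draid2:4d:44c:2s da0 da1 da2 ... da43    # all 44 disks listed in full in the real command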

The pool works great, but we're running out of space. So, we replaced the first 6 drives in the pool with 4 TB drives, expecting those drives' usable capacity to grow from 4*2 = 8 TB to 4*4 = 16 TB, i.e. roughly an extra 8 TB of free space in the pool. However, to our great surprise, that is not the case! The total storage capacity of the pool has not changed, even after running "zpool online -e" against each of the 4 TB drives.
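
For reference, the sort of checks involved at this point (placeholder pool name again; these aren't draid-specific and may not change anything here) are whether autoexpand is enabled and whether zpool list still reports unclaimed space:

  zpool get autoexpand bigpool
  zpool list -v bigpool     # the EXPANDSZ column shows space that hasn't been claimed yet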

Do we need to replace EVERY drive in the draid vdev in order to get extra free space in the pool? Or is there some other command that needs to be run to tell ZFS to use the extra storage space available? Or ... ?

Usually, we just replace drives in groups of 6, going from 1 TB to 2 TB to 4 TB as needed. Having to buy 44 drives (or 88 in our other draid-using storage server) and replace them all at once is going to be a massive (and expensive) undertaking! That might be enough to make us rethink how we use draid going forward. :(

--
Freddie Cash
fjwcash@gmail.com