Subject: Re: Expanding storage in a ZFS pool using draid
From: Jan Bramkamp <crest@rlwinm.de>
To: freebsd-fs@freebsd.org
Date: Fri, 19 May 2023 18:37:34 +0200

On 03.05.23 18:08, Freddie Cash wrote:
> I might be missing something, or not understanding how draid works
> "behind the scenes".
>
> With a ZFS pool using multiple raidz vdevs, it's possible to increase
> the available storage in a pool by replacing each drive in the raidz
> vdev. Once the last drive is replaced, either the extra storage space
> appears automatically, or you run "zpool online -e <pool> <disk>" for
> each disk.
>
> For example, if you create a pool with 2 raidz vdevs using 6x 1 TB
> drives per vdev, you'll end up with ~10 TB of space available to the
> pool. Later, you can replace all 6 drives in one raidz vdev with 2 TB
> drives and get an extra 5 TB of free space in the pool. Later, you
> can replace the 6 drives in the other raidz vdev with 2 TB drives and
> get another 5 TB of free space in the pool.
>
> We've been doing this for years, and it works great.
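That per-vdev raidz workflow looks roughly like this (pool and device
names here are hypothetical, not taken from the thread):

    # Replace each 1 TB disk in the vdev with a 2 TB disk, one at a time,
    # after physically swapping the drive in the same slot.
    zpool replace tank da0
    zpool status tank            # wait for the resilver to finish
    # ... repeat for da1 through da5 ...

    # Once the last disk has resilvered, grow into the new space
    # (not needed if "zpool set autoexpand=on tank" was set beforehand):
    zpool online -e tank da0     # repeat for each disk in the vdev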
> When draid became available, we configured our new storage pools
> using that instead of multiple raidz vdevs. One of the pools uses
> 44x 2 TB drives, configured in a draid pool using:
>
>   nparity: 2
>   draid_ndata: 4
>   draid_ngroups: 7
>   draid_nspares: 2
>
> IIUC, this means the drives are configured in 7 groups of 6, using 4
> drives for data and 2 for parity in each group, with 2 drives
> configured as spares.
>
> The pool works great, but we're running out of space. So, we replaced
> the first 6 drives in the pool with 4 TB drives, expecting to get an
> extra 4*4=16 TB of free space in the pool. However, to our great
> surprise, that is not the case! The total storage capacity of the
> pool has not changed, even after running "zpool online -e" against
> each of the 4 TB drives.
>
> Do we need to replace EVERY drive in the draid vdev in order to get
> extra free space in the pool? Or is there some other command that
> needs to be run to tell ZFS to use the extra storage space available?
> Or ... ?
>
> Usually, we just replace drives in groups of 6, going from 1 TB to
> 2 TB to 4 TB as needed. Having to buy 44 drives (or 88 in our other
> draid-using storage server) and replace them all at once is going to
> be a massive (and expensive) undertaking! That might be enough to
> rethink how we use draid going forward. :(

This is not going to work with dRAID: all children of a dRAID vdev
need to be the same size, because dRAID only uses the capacity of the
smallest child on every device.
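In practice that means the vdev only grows after every child,
distributed spares included, reports the larger size. A minimal sketch
of the full-replacement procedure, again with hypothetical names
(pool "tank", children da0 through da43):

    # Let the vdev grow automatically once all children are larger.
    zpool set autoexpand=on tank

    # Replace ALL 44 children, waiting for each rebuild to complete:
    zpool replace tank da0
    # ... repeat for da1 through da43 ...

    # With autoexpand=off, expand each child explicitly instead:
    zpool online -e tank da0     # repeat for every child

    # "zpool list -v tank" shows the per-vdev size before and after.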