From nobody Mon Apr 22 19:13:08 2024
Date: Mon, 22 Apr 2024 15:13:08 -0400
Subject: Re: block size: 512B configured, 4096B native all of a sudden
From: mike tancsa <mike@sentex.net>
To: Dag-Erling Smørgrav
Cc: FreeBSD-STABLE Mailing List
List-Id: Production branch of FreeBSD source code
List-Archive: https://lists.freebsd.org/archives/freebsd-stable
Message-ID: <01bd9c82-52dd-4983-be08-a9c1810cbaaa@sentex.net>
In-Reply-To: <9dd52865-6928-41dc-910b-82988fe6b2af@sentex.net>
On 4/22/2024 12:05 PM, mike tancsa wrote:
> On 4/22/2024 11:49 AM, Dag-Erling Smørgrav wrote:
>> mike tancsa <mike@sentex.net> writes:
>>> I was afraid of that. So basically I have to copy the entire pool or
>>> live with the performance penalty :(  73TB is a lot to copy / move.
>> Oh I didn't realize those disks were part of the larger pool, I thought
>> they were a separate pool named “special”.  You might be able to simply
>> detach and then re-attach both drives, afaik a draid can function
>> without its special vdev.

Yeah, it's an actual special vdev I added to speed up metadata and small-file access. I think zpool detach only works on mirror vdevs, not raidz vdevs, and from searching Google there does not seem to be a way to detach a special vdev from a raidz pool.
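If it does come down to copying everything, a recursive send/receive to a freshly created pool (built without the special vdev, or with matching ashift) is probably the least painful route. A minimal sketch; the pool names here are placeholders, not the real ones:

```shell
# Snapshot every dataset recursively, then replicate the whole tree.
# "tank" / "newtank" are example names; -R preserves child datasets,
# snapshots, and properties, -F rolls the target back to match.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F newtank
```

For 73TB you would likely want to run an incremental pass afterwards (`zfs send -RI` from the first snapshot to a final one) to pick up changes made during the initial copy.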


I think I confirmed my suspicion with a quick test pool:

truncate -s 100G /quirk-test/junk.raw
truncate -s 100G /quirk-test/junk2.raw

mdconfig -f /quirk-test/junk.raw
mdconfig -f /quirk-test/junk2.raw
gpart create -s gpt md0
gpart create -s gpt md1
gpart add -s 20G -t freebsd-zfs /dev/md0
gpart add -s 20G -t freebsd-zfs /dev/md0
gpart add -s 20G -t freebsd-zfs /dev/md0
gpart add -s 20G -t freebsd-zfs /dev/md1
gpart add -s 20G -t freebsd-zfs /dev/md1
gpart add -s 20G -t freebsd-zfs /dev/md1
zpool create testpool raidz1 /dev/md0p1 /dev/md1p1 /dev/md0p2 /dev/md1p2
zpool add testpool special mirror /dev/md0p3 /dev/md1p3
zpool status


  pool: testpool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            md0p1   ONLINE       0     0     0
            md1p1   ONLINE       0     0     0
            md0p2   ONLINE       0     0     0
            md1p2   ONLINE       0     0     0
        special
          mirror-1  ONLINE       0     0     0
            md0p3   ONLINE       0     0     0
            md1p3   ONLINE       0     0     0


 # zpool remove testpool mirror-1
cannot remove mirror-1: invalid config; all top-level vdevs must have the same sector size and not be raidz.
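The sector-size half of that error can be inspected with zdb, run against the test pool above; each top-level vdev reports its ashift in the cached config, and device removal requires them all to match (and no raidz top-level vdevs at all):

```shell
# Dump the cached pool config and pull out the ashift of each
# top-level vdev in the test pool created above.
zdb -C testpool | grep ashift
```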

For the special vdev, I think zpool detach just breaks the mirror down to a single device:

#  zpool detach testpool /dev/md1p3

# zpool status
  pool: quirk-test
 state: ONLINE
  scan: resilvered 128G in 00:08:52 with 0 errors on Mon Apr  8 14:37:29 2024
config:

        NAME        STATE     READ WRITE CKSUM
        quirk-test  ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            da4p1   ONLINE       0     0     0
            da1p1   ONLINE       0     0     0
            da2p1   ONLINE       0     0     0
            da5p1   ONLINE       0     0     0

errors: No known data errors

  pool: testpool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            md0p1   ONLINE       0     0     0
            md1p1   ONLINE       0     0     0
            md0p2   ONLINE       0     0     0
            md1p2   ONLINE       0     0     0
        special
          md0p3     ONLINE       0     0     0

errors: No known data errors

 # zpool remove testpool /dev/md0p3
cannot remove /dev/md0p3: invalid config; all top-level vdevs must have the same sector size and not be raidz.

 # zpool detach testpool /dev/md0p3
cannot detach /dev/md0p3: only applicable to mirror and replacing vdevs
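For anyone repeating this, the scratch setup tears down with something like the following; the md unit numbers assume md0 and md1 were the units mdconfig allocated above:

```shell
# Destroy the throwaway pool, detach the memory disks, and
# remove the backing files. Requires root, like the setup did.
zpool destroy testpool
mdconfig -d -u 0
mdconfig -d -u 1
rm /quirk-test/junk.raw /quirk-test/junk2.raw
```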


    ---Mike

