Date: Wed, 3 Jul 2019 15:42:58 +0200 (CEST)
From: Trond Endrestøl <trond.endrestol@ximalas.info>
To: freebsd-questions@freebsd.org
Subject: Re: extremely slow disk I/O after updating to 12.0

On Wed, 3 Jul 2019 13:34+0200, David Demelier wrote:

> zpool status indicates that the block size is erroneous and that I may
> expect performance degradation. But the degradation is quite severe. Can
> someone confirm?
>
> # zpool status
>   pool: tank
>  state: ONLINE
> status: One or more devices are configured to use a non-native block size.
>         Expect reduced performance.
> action: Replace affected devices with devices that support the
>         configured block size, or migrate data to a properly configured
>         pool.
>   scan: none requested
> config:
>
>         NAME          STATE     READ WRITE CKSUM
>         tank          ONLINE       0     0     0
>           raidz1-0    ONLINE       0     0     0
>             gpt/zfs0  ONLINE       0     0     0  block size: 512B configured, 4096B native
>             gpt/zfs1  ONLINE       0     0     0  block size: 512B configured, 4096B native
>
> errors: No known data errors
>
> According to some googling, I must update those pools to change the
> block size. However, there are not many articles on that, so I'm a bit
> afraid of doing this. The zfs0 and zfs1 are in raidz.
>
> Any help is very welcome.

If you want to change the block size, I'm afraid you must back up your
data somewhere, destroy tank, and recreate it after setting:

sysctl vfs.zfs.min_auto_ashift=12

If you only deal with 4Kn drives, then I suggest you edit
/etc/sysctl.conf, adding for future use:

vfs.zfs.min_auto_ashift=12
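
In case it helps, here's a rough sketch of the destroy/recreate step,
assuming the same two-disk raidz1 layout and the gpt/zfs0 and gpt/zfs1
labels shown in your zpool status output; I haven't tested this exact
sequence, so double-check the names before running it:

# Make ashift 12 the minimum for newly created vdevs:
sysctl vfs.zfs.min_auto_ashift=12
# WARNING: this destroys everything on tank; complete your backup first!
zpool destroy tank
# Recreate the pool with the same layout, now with 4096B blocks:
zpool create tank raidz1 gpt/zfs0 gpt/zfs1
# Verify the new pool really uses ashift 12:
zdb -C tank | grep ashift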

Your options range from replicating the data to another computer,
either as a plain file (do this twice, saving to a different filename
each time) or by receiving and unpacking the zstream into another
computer's zpool, to migrating to a new pair of disks.

Here's my outline for doing the ZFS transfer:

==

Prepare computer B for receiving the zstream:

nc -l 1234 > some.file.zfs

Or, still on computer B:

nc -l 1234 | zfs recv -Fduv somepool
# Optional, to be done after the transfer:
zfs destroy -Rv somepool@transfer

In the latter case, existing filesystems beneath the top-level
filesystem in somepool will be replaced by whatever is in the zstream.
Filesystems with "pathnames" unique to somepool will be unaffected.

On computer A:

zfs snap tank@transfer
zfs send -RLev tank@transfer | nc -N computer.B.some.domain 1234
zfs destroy -Rv tank@transfer

==

Feel free to replace nc (netcat) with ssh or something else.

==

zfs send and zfs recv can be piped together if both pools are connected
to the same computer:

zfs send -RLev tank@transfer | zfs recv -Fduv newtank

newtank can be renamed simply by exporting it and importing it under
the desired name:

zpool export newtank
zpool import -N newtank tank

Note, this must be done while running FreeBSD from some other media,
such as a DVD or a memstick. Take care to ensure the bootfs pool
property points to the correct boot environment (BE) before rebooting.

==

To transfer the data back to the new tank pool:

Prepare computer A for receiving the zstream:

nc -l 1234 | zfs recv -Fduv tank
# Do these two commands after the transfer:
zfs destroy -Rv tank@transfer
zpool set bootfs=tank/the/correct/boot/environment tank

On computer B:

nc -N computer.A.some.domain 1234 < some.file.zfs

Or, still on computer B:

zfs snap somepool@transfer  # if you removed the previous @transfer snapshot
zfs send -RLev somepool@transfer | nc -N computer.A.some.domain 1234

-- 
Trond.