Subject: Re: extremely slow disk I/O after updating to 12.0
From: David Demelier <markand@malikania.fr>
To: freebsd-questions@freebsd.org
Date: Wed, 3 Jul 2019 16:31:40 +0200

On 03/07/2019 15:51, Karl Denninger wrote:
> On 7/3/2019 08:42, Trond Endrestøl wrote:
>> On Wed, 3 Jul 2019 13:34+0200, David Demelier wrote:
>>
>>> zpool status indicates that the block size is wrong and that I should
>>> expect performance degradation, but a slowdown this severe is surprising.
>>> Can someone confirm?
>>>
>>> # zpool status
>>>   pool: tank
>>>  state: ONLINE
>>> status: One or more devices are configured to use a non-native block size.
>>>         Expect reduced performance.
>>> action: Replace affected devices with devices that support the
>>>         configured block size, or migrate data to a properly configured
>>>         pool.
>>>   scan: none requested
>>> config:
>>>
>>>         NAME          STATE     READ WRITE CKSUM
>>>         tank          ONLINE       0     0     0
>>>           raidz1-0    ONLINE       0     0     0
>>>             gpt/zfs0  ONLINE       0     0     0  block size: 512B configured, 4096B native
>>>             gpt/zfs1  ONLINE       0     0     0  block size: 512B configured, 4096B native
>>>
>>> errors: No known data errors
>>>
>>> According to some googling, I must recreate those pools to change the
>>> block size. However, there are not many articles on that, so I'm a bit
>>> afraid of doing it. The zfs0 and zfs1 devices are in a raidz.
>>>
>>> Any help is very welcome.
>
> ashift=9 on a 4k-native block device is going to do horrible things to
> performance.  There's no way to change it on an existing pool, as the
> other respondent noted; you will have to back up the data on the pool,
> destroy the pool and then re-create it.
>
> Was this pool originally created with 512b disks and then the drives
> were swapped out with a "replace" at some point for advanced-format units?

Thanks for your answers.

No, it was created back in 2012 using FreeBSD 9. I no longer have the command
history, but it was something like:

    zpool create tank raidz /dev/gpt/zfs0 /dev/gpt/zfs1

Regards,

-- 
David
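
For reference, the configured shift can be confirmed before doing anything
destructive. A minimal check, assuming the pool is still named tank as above;
the exact zdb output format varies between releases:

    # Dump the cached pool configuration and look at the ashift of each vdev:
    # ashift=9 means 512-byte allocations, ashift=12 means 4 KiB.
    zdb -C tank | grep ashift

    # Minimum ashift the FreeBSD kernel will use for newly created vdevs.
    sysctl vfs.zfs.min_auto_ashift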
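
Since ashift cannot be changed on an existing vdev, the rebuild Karl describes
would look roughly like the sketch below. It assumes a second pool called
backup with enough free space and a snapshot named migrate; both names are
only placeholders for illustration, and the send/receive flags should be
double-checked against the installed ZFS version before running anything:

    # Refuse to create new vdevs with less than 4 KiB allocation size.
    sysctl vfs.zfs.min_auto_ashift=12

    # Snapshot everything and copy it to the scratch pool.
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs receive -F backup/tank

    # Destroy and re-create the pool; the new vdevs pick up ashift=12.
    zpool destroy tank
    zpool create tank raidz /dev/gpt/zfs0 /dev/gpt/zfs1

    # Restore the data and check that the block size warning is gone.
    zfs send -R backup/tank@migrate | zfs receive -F tank
    zpool status tank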