From owner-freebsd-questions@freebsd.org Mon Jan 21 13:22:41 2019
Date: Mon, 21 Jan 2019 22:22:15 +0900 (JST)
Message-Id: <20190121.222215.1653763436758831663.yasu@utahime.org>
To: freebsd-questions@freebsd.org
Subject: Re: Performance degradation of ZFS?
From: Yasuhiro KIMURA <yasu@utahime.org>
In-Reply-To: <102da655-e6c1-ee50-aad5-e3fe55d233de@punkt.de> <20190121123524.3c97bdfb2254e3393a58b831@sohara.org>
References: <20190121.210941.2275299827563542964.yasu@utahime.org> <102da655-e6c1-ee50-aad5-e3fe55d233de@punkt.de>

From: Lars Liedtke
Subject: Re: Performance degradation of ZFS?
Date: Mon, 21 Jan 2019 13:18:58 +0100

> might your Pool run out of capacity? Note: ZFS performance goes down
> from about 85% to 90% upwards.

Though I can't remember exactly, the capacity of the pool was less than
40% according to the output of 'zpool list zroot'.
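For reference, this is the kind of check I mean (the sample output below is
only illustrative, not the actual figures from my machine):

```sh
# Show pool usage; the CAP column is the fill percentage that matters
# for the 85%-90% performance cliff mentioned above.
zpool list zroot
#   NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
#   zroot   460G   170G   290G        -         -     8%    36%  1.00x  ONLINE  -

# Script-friendly form that prints just the capacity value:
zpool list -H -o capacity zroot
```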
From: Steve O'Hara-Smith
Subject: Re: Performance degradation of ZFS?
Date: Mon, 21 Jan 2019 12:35:24 +0000

> How is the pool configured? How full is it? How well is the data
> striped (are some vdevs fuller than others)? Are any of the drives
> showing trouble? Do you run a regular scrub? What does zpool status
> show?

I created the old pool with the installer of 11.0-RELEASE using automatic
ZFS configuration. It used only one HDD and therefore wasn't striped at
all. AFAIK the drive didn't show any trouble.

As for scrubbing, I used the daily periodic job by setting
'daily_scrub_zfs_enable="YES"', so the pool was scrubbed every 35 days.
And as far as I remember there was no case where 'zpool status' showed
that a scrub had repaired any errors.

Since I followed the regular upgrade steps written in /usr/src/Makefile
when upgrading from 11.0 to 11.1 and from 11.1 to 11.2, I used the same
zroot pool for about 2 years and 2 months. And because I'm using the same
hardware after the clean install of 12.0-RELEASE, hardware is not the
reason for the performance change.

---
Yasuhiro KIMURA
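For completeness, the scrub setup I described is just this knob in
/etc/periodic.conf; the 35-day interval comes from the periodic script's
built-in default threshold, which can be overridden:

```sh
# /etc/periodic.conf
daily_scrub_zfs_enable="YES"

# Optional: the daily job only starts a new scrub once this many days
# have passed since the last one; 35 is the built-in default.
#daily_scrub_zfs_default_threshold="35"
```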