Date: Mon, 21 Jan 2019 22:22:15 +0900 (JST)
From: Yasuhiro KIMURA <yasu@utahime.org>
To: freebsd-questions@freebsd.org
Subject: Re: Performance degradation of ZFS?
Message-ID: <20190121.222215.1653763436758831663.yasu@utahime.org>
In-Reply-To: <102da655-e6c1-ee50-aad5-e3fe55d233de@punkt.de> <20190121123524.3c97bdfb2254e3393a58b831@sohara.org>
References: <20190121.210941.2275299827563542964.yasu@utahime.org> <102da655-e6c1-ee50-aad5-e3fe55d233de@punkt.de>
From: Lars Liedtke <liedtke@punkt.de>
Subject: Re: Performance degradation of ZFS?
Date: Mon, 21 Jan 2019 13:18:58 +0100

> might your Pool run out of capacity? Note: ZFS performance goes down
> from about 85% to 90% upwards.

Though I can't remember exactly, the capacity of the pool was less than
40% according to the output of 'zpool list zroot'.

From: Steve O'Hara-Smith <steve@sohara.org>
Subject: Re: Performance degradation of ZFS?
Date: Mon, 21 Jan 2019 12:35:24 +0000

> How is the pool configured ? How full is it ? How well is the data
> striped (are some vdevs fuller than others) ? Are any of the drives
> showing trouble ? Do you run a regular scrub ? What does zpool status
> show ?

I created the old pool using the installer of 11.0-RELEASE with automatic
ZFS configuration. It used only one HDD and therefore wasn't striped at
all. As far as I know, the drive didn't show any trouble.

As for scrubbing, I used the daily periodic job by setting
'daily_scrub_zfs_enable="YES"', so the pool was scrubbed every 35 days.
And as far as I remember, there was no case in which 'zpool status'
showed that a scrub had repaired any errors.

Since I followed the regular upgrade steps written in /usr/src/Makefile
when upgrading from 11.0 to 11.1 and from 11.1 to 11.2, I used the same
zroot pool for about 2 years and 2 months. And because I'm using the same
hardware after the clean install of 12.0-RELEASE, hardware is not the
reason for the performance change.

---
Yasuhiro KIMURA
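As an aside, the capacity check mentioned above can be scripted. The
helper below is a hypothetical sketch (not from this thread): it reads
"name<TAB>capacity" pairs, such as the output of
'zpool list -H -o name,capacity', and flags pools at or above the ~80%
mark where performance is said to degrade. The sample input mirrors the
pool described above (under 40% full).

```shell
#!/bin/sh
# Hypothetical helper: warn when a pool's capacity crosses ~80%,
# where ZFS performance typically starts to degrade.
check_cap() {
    awk -F'\t' '{
        cap = $2
        gsub(/%/, "", cap)               # strip the trailing percent sign
        if (cap + 0 >= 80)
            print $1 " is " cap "% full: expect degraded performance"
        else
            print $1 " is " cap "% full: OK"
    }'
}

# On a live system: zpool list -H -o name,capacity | check_cap
# Sample input for illustration:
printf 'zroot\t38%%\n' | check_cap
# -> zroot is 38% full: OK
```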
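For reference, the periodic(8) knobs involved in the scrub setup
described above look like this; the 35-day threshold is the stock
default shipped in /etc/defaults/periodic.conf, which is why enabling
the daily job results in a scrub roughly every 35 days.

```shell
# /etc/periodic.conf -- enable the daily ZFS scrub check
daily_scrub_zfs_enable="YES"

# The job only starts a scrub when the last one is older than this
# threshold (days); the stock default of 35 matches the behaviour
# described above. Uncomment to override.
#daily_scrub_zfs_default_threshold="35"
```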