From owner-freebsd-current@freebsd.org Thu Mar 23 12:38:09 2017
Date: Thu, 23 Mar 2017 15:38:05 +0300
From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: "O. Hartmann"
Cc: Michael Gmelin, "O. Hartmann", FreeBSD CURRENT
Subject: Re: CURRENT: slow like crap! ZFS scrubbing and ports update > 25 min
Message-ID: <20170323123805.GH86500@zxy.spb.ru>
References: <20170322210225.511da375@thor.intern.walstatt.dynvpn.de>
 <70346774-2E34-49CA-8B62-497BD346CBC8@grem.de>
 <20170322222524.2db39c65@thor.intern.walstatt.dynvpn.de>
In-Reply-To: <20170322222524.2db39c65@thor.intern.walstatt.dynvpn.de>

On Wed, Mar 22, 2017 at 10:25:24PM +0100, O. Hartmann wrote:

> Am Wed, 22 Mar 2017 21:10:51 +0100
> Michael Gmelin schrieb:
>
> > On 22 Mar 2017, at 21:02, O. Hartmann wrote:
> > >
> > > CURRENT (FreeBSD 12.0-CURRENT #82 r315720: Wed Mar 22 18:49:28 CET 2017 amd64) is
> > > annoyingly slow! While scrubbing is working on my 12 GB ZFS volume,
> > > updating /usr/ports takes >25 min(!). That is an absolute record now.
> > >
> > > I do an almost daily update of world and the ports tree and have periodic scrubbing of
> > > the ZFS volumes every 35 days, as defined in /etc/defaults. The ports tree hasn't grown
> > > much, the content of the ZFS volume hasn't changed much (~ 100 GB, its fill is about
> > > 4 TB now), and this has been constant for ~ 2 years.
> > >
> > > I've experienced before that while scrubbing the ZFS volume, some operations, even the
> > > update of /usr/ports which resides on that ZFS RAIDZ volume, take a bit longer than
> > > usual - but never as long as now!
> > >
> > > Another box is quite unusable while it is scrubbing, and it has been usable at such
> > > times before. The change is dramatic ...
> >
> > What do "zpool list", "gstat" and "zpool status" show?
>
> zpool list:
>
> NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> TANK00  10.9T  5.45T  5.42T         -     7%    50%  1.58x  ONLINE  -
>
> Deduplication is off right now, I had one ZFS filesystem with dedup enabled.
>
> gstat: not shown here, but the drives comprising the volume (4x 3 TB) show 100% busy each,
> but one drive is always a bit off (by 10% lower) and this drive is walking through all
> four drives ada2, ada3, ada4 and ada5. Nothing unusual in that situation. But the
> throughput is incredibly low, for example ada4:
>
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>     2    174    174   1307   11.4      0      0    0.0   99.4| ada4
>
> kBps (kilo Bits per second I presume) are peaking at ~ 4800 - 5000. On another box, this
> is ~ 20x higher! Most of the time, kBps r and w stay at ~ 500 - 600.

kilo Bytes, actually. 174 reads per second is normal for a typical 7200 RPM disk. What is
too low is the transfer per request: about 1307/174 = ~8 KB. I don't know the root cause of
this. With a raidz of 4 disks (3 data disks) that is 8*3 = ~24 KB per record. Maybe
compression is enabled and ZFS is using the default 128 KB record size? In that case this
is the expected performance. Use a record size of 1 MB or higher.
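
Something like this would show whether that is the case and raise the record size going
forward (just a sketch; TANK00 is the pool from the "zpool list" output above, and
TANK00/<dataset> is a placeholder for whichever dataset actually holds the data):

  # Show the record size and compression settings in effect on the pool
  # and all of its datasets:
  zfs get -r recordsize,compression,compressratio TANK00

  # recordsize=1M needs the large_blocks pool feature to be active:
  zpool get feature@large_blocks TANK00

  # Raise the record size on the dataset in question (placeholder name):
  zfs set recordsize=1M TANK00/<dataset>

Note that recordsize only applies to files written after the change; existing files keep
their current block size until they are rewritten.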
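
And to cross-check the gstat numbers at the pool level while the scrub runs, the per-vdev
statistics can be watched with (again only a suggestion):

  # Per-vdev operations and bandwidth, refreshed every 5 seconds:
  zpool iostat -v TANK00 5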