Date: Thu, 23 Mar 2017 16:39:12 +0100
From: "O. Hartmann" <ohartmann@walstatt.org>
To: Slawa Olhovchenkov
Cc: Michael Gmelin, "O. Hartmann", FreeBSD CURRENT
Subject: Re: CURRENT: slow like crap! ZFS scrubbing and ports update > 25 min

On Thu, 23 Mar 2017 15:38:05 +0300, Slawa Olhovchenkov wrote:

> On Wed, Mar 22, 2017 at 10:25:24PM +0100, O. Hartmann wrote:
>
> > On Wed, 22 Mar 2017 21:10:51 +0100, Michael Gmelin wrote:
> >
> > > > On 22 Mar 2017, at 21:02, O. Hartmann wrote:
> > > >
> > > > CURRENT (FreeBSD 12.0-CURRENT #82 r315720: Wed Mar 22 18:49:28 CET 2017 amd64) is
> > > > annoyingly slow! While scrubbing is running on my 12 TB ZFS volume,
> > > > updating /usr/ports takes > 25 min(!). That is an absolute record now.
> > > >
> > > > I do an almost daily update of world and the ports tree, and ZFS volumes are
> > > > scrubbed every 35 days, as defined in /etc/defaults/periodic.conf. The ports tree
> > > > hasn't grown much, the content of the ZFS volume hasn't changed much (~ 100 GB;
> > > > its fill is about 4 TB now), and this has been constant for ~ 2 years.
> > > >
> > > > I've seen before that while the ZFS volume is scrubbing, some operations, even
> > > > the update of /usr/ports, which resides on that ZFS RAIDZ volume, take a bit
> > > > longer than usual - but never as long as now!
> > > >
> > > > Another box is quite unusable while it is scrubbing, and it has been usable
> > > > during scrubs before. The change is dramatic ...
> > > >
> > >
> > > What do "zpool list", "gstat" and "zpool status" show?
> > >
> >
> > zpool list:
> >
> > NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> > TANK00  10.9T  5.45T  5.42T         -     7%    50%  1.58x  ONLINE  -
> >
> > Deduplication is off right now; I had one ZFS filesystem with dedup enabled earlier.
> >
> > gstat: not shown here, but the drives comprising the volume (4x 3 TB) show 100% busy
> > each, though one drive is always a bit off (about 10% lower), and that role walks
> > through all four drives ada2, ada3, ada4 and ada5. Nothing unusual in that situation.
> > But the throughput is incredibly low, for example ada4:
> >
> > L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy  Name
> >    2    174    174   1307   11.4      0      0    0.0    99.4| ada4
> >
> > kBps (kilo bits per second, I presume) peaks at ~ 4800 - 5000. On another box this
> > is ~ 20x higher! Most of the time, read and write kBps stay at ~ 500 - 600.
>
> Kilobytes.
> 174 ops/s is normal for a typical 7200 RPM disk. The transfer size per request is
> too low: about 1307/174 = ~8 KB. I don't know the root cause of this. I see a
> raidz of 4 disks, so 8 KB * 3 data disks = ~24 KB per record. Maybe compression is
> enabled and ZFS uses a 128 KB record size? In that case this is the expected
> performance. Use a 1 MB or larger record size.
>

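(For reference, a minimal sketch of how one could check for that and raise the
record size along the lines Slawa suggests. The dataset name "TANK00/ports" is
only a placeholder for whatever dataset holds /usr/ports, a 1M recordsize needs
the large_blocks pool feature, and the change only applies to blocks written
after it is set:)

  # current record size and compression of the dataset (placeholder name)
  zfs get recordsize,compression TANK00/ports

  # 1M records require the large_blocks feature to be enabled on the pool
  zpool get feature@large_blocks TANK00

  # raise the record size; only newly written files/blocks pick it up
  zfs set recordsize=1M TANK00/ports
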
I shut down the box overnight and rebooted this morning. After checking the output of
"zpool status" from remote, the throughput was at ~ 229 MBytes/s - a value I'd expected -
peaking again at ~ 300 MBytes/s. I assume my crap home hardware doesn't provide more,
but at this point everything is as expected. The load, as observed via top (top -S),
showed ~ 75 - 85% idle. On the other home box with ZFS scrubbing active, the drives
showed a throughput of ~ 110 MBytes/s and 129 MBytes/s - also values I'd expected. But
that system was really jumpy, and the load showed ~ 80% idle (two cores, 4 threads, 8 GB
RAM; the first box mentioned, with the larger array, has 4 cores/8 threads and 16 GB).

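(And a small sketch of watching a running scrub the same way, assuming the pool
name TANK00 from above; the sysctl knobs named here are the ones CURRENT carried
around that time and may differ on other versions:)

  # the "scan:" lines report scrub progress, rate and estimated completion
  zpool status TANK00

  # per-disk throughput of the physical providers, refreshed every second
  gstat -p -I 1s

  # include system processes to see where CPU time goes during the scrub
  top -S

  # knobs that throttle scrub I/O when the pool sees other activity
  sysctl vfs.zfs.scrub_delay vfs.zfs.scan_idle
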
--
O. Hartmann

I object to the use or transfer of my data for advertising purposes or for
market or opinion research (§ 28 Abs. 4 BDSG).