Date:      Thu, 23 Mar 2017 16:39:12 +0100
From:      "O. Hartmann" <ohartmann@walstatt.org>
To:        Slawa Olhovchenkov <slw@zxy.spb.ru>
Cc:        Michael Gmelin <freebsd@grem.de>, "O. Hartmann" <ohartmann@walstatt.org>, FreeBSD CURRENT <freebsd-current@freebsd.org>
Subject:   Re: CURRENT: slow like crap! ZFS scrubbing and ports update > 25 min
Message-ID:  <20170323163702.2b920804@thor.intern.walstatt.dynvpn.de>
In-Reply-To: <20170323123805.GH86500@zxy.spb.ru>
References:  <20170322210225.511da375@thor.intern.walstatt.dynvpn.de> <70346774-2E34-49CA-8B62-497BD346CBC8@grem.de> <20170322222524.2db39c65@thor.intern.walstatt.dynvpn.de> <20170323123805.GH86500@zxy.spb.ru>

On Thu, 23 Mar 2017 15:38:05 +0300,
Slawa Olhovchenkov <slw@zxy.spb.ru> wrote:

> On Wed, Mar 22, 2017 at 10:25:24PM +0100, O. Hartmann wrote:
>
> > On Wed, 22 Mar 2017 21:10:51 +0100,
> > Michael Gmelin <freebsd@grem.de> wrote:
> >
> > > > On 22 Mar 2017, at 21:02, O. Hartmann <ohartmann@walstatt.org> wrote:
> > > >
> > > > CURRENT (FreeBSD 12.0-CURRENT #82 r315720: Wed Mar 22 18:49:28 CET 2017
> > > > amd64) is annoyingly slow! While a scrub is running on my 12 TB ZFS
> > > > volume, updating /usr/ports takes >25 min(!). That is an absolute record
> > > > now.
> > > >
> > > > I do an almost daily update of world and the ports tree, and the ZFS
> > > > volumes are scrubbed periodically every 35 days, as defined in
> > > > /etc/defaults. The ports tree hasn't grown much, the content of the ZFS
> > > > volume hasn't changed much (~ 100 GB; its fill is about 4 TB now), and
> > > > this has been constant for ~ 2 years.
> > > >
> > > > I've experienced before that while scrubbing the ZFS volume, some
> > > > operations, even the update of /usr/ports, which resides on that ZFS
> > > > RAIDZ volume, take a bit longer than usual - but never as long as now!
> > > >
> > > > Another box is quite unusable while it is scrubbing, and it has been
> > > > usable during scrubs before. The change is dramatic ...
> > >
> > > What do "zpool list", "gstat" and "zpool status" show?
> > >
> > zpool list:
> >
> > NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> > TANK00   10.9T  5.45T  5.42T         -     7%    50%  1.58x  ONLINE  -
> >
> > Deduplication is off right now; I had one ZFS filesystem with dedup enabled.
> >
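Side note: with dedup switched off but the pool still reporting a 1.58x DEDUP
ratio, blocks deduplicated in the past keep their dedup-table entries until
they are rewritten. A sketch for gauging how large that table still is, using
the pool name from above:

    zpool status -D TANK00    # appends a dedup-table (DDT) summary to the status output
    zdb -DD TANK00            # more detailed dedup-table statistics
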
> > gstat: not shown here, but the drives comprising the volume (4x 3 TB) show
> > 100% busy each, though one drive is always a bit off (about 10% lower), and
> > this role rotates through all four drives ada2, ada3, ada4 and ada5. Nothing
> > unusual in that situation. But the throughput is incredibly low, for example
> > ada4:
> >
> >  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
> >     2    174    174   1307   11.4      0      0    0.0   99.4| ada4
> >
> > kBps (kilobits per second, I presume) peaks at ~ 4800-5000. On another box,
> > this is ~ 20x higher! Most of the time, kBps r and w stay at ~ 500-600.
>
> Kilobytes.
> 174 ops/s is normal for a typical 7200 RPM disk. The transfer per request is
> just too low: about 1307/174 = ~8 KB. I don't know the root cause of this. I
> see a raidz of 4 disks, so 8 KB * 3 data disks = ~24 KB per record. Maybe
> compression is enabled and ZFS uses a 128 KB record size? In that case this
> is the expected performance. Use a 1 MB or higher record size.
>
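To check that record-size hypothesis, something along these lines should work
- a sketch, keeping in mind that recordsize only affects newly written files
and that 1M records need the large_blocks pool feature:

    zfs get -r recordsize,compression TANK00   # current settings per dataset
    zfs set recordsize=1M TANK00/data          # TANK00/data is a made-up dataset name
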

I shut the box down over night and rebooted this morning. After checking the
output of "zpool status" from remote, the throughput was at ~ 229 MBytes/s - a
value I'd expected - peaking again at ~ 300 MBytes/s. I assume my crap home
hardware is not providing more, but at this point everything is as expected.
The load, as observed via top -S, showed ~ 75-85% idle. Anyway, on the other
home box with ZFS scrubbing active, the drives showed a throughput of ~ 110
MBytes/s and 129 MBytes/s - also values I'd expected. But that system was
really jumpy, and the load showed ~ 80% idle (two cores, 4 threads, 8 GB RAM;
the first box mentioned, with the larger array, has 4 cores/8 threads and 16
GB RAM).
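
If the jumpiness during scrubs is the main pain point, the scrub itself can be
throttled - a sketch, assuming the pre-OpenZFS tunables that 12-CURRENT of
this era still had:

    zpool status TANK00              # scrub progress and scan rate
    sysctl vfs.zfs.scrub_delay       # ticks inserted between scrub I/Os
    sysctl vfs.zfs.scrub_delay=8     # a larger value yields more to normal I/O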

--
O. Hartmann

I object to the use or transfer of my data for advertising purposes or for
market or opinion research (§ 28 para. 4 BDSG).



