Date:      Tue, 26 Apr 2016 22:33:44 -0400
From:      PK1048 <paul@pk1048.com>
To:        Andy Farkas <andyf@andyit.com.au>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: How to speed up slow zpool scrub?
Message-ID:  <56C0A956-F134-4A8D-A8B6-B93DCA045BE4@pk1048.com>
In-Reply-To: <571FEB34.7040305@andyit.com.au>
References:  <698816653.2698619.1461685653634.JavaMail.yahoo.ref@mail.yahoo.com> <698816653.2698619.1461685653634.JavaMail.yahoo@mail.yahoo.com> <571F9897.2070008@quip.cz> <571FEB34.7040305@andyit.com.au>


> On Apr 26, 2016, at 18:27, Andy Farkas <andyf@andyit.com.au> wrote:
>
> On 27/04/2016 02:34, Miroslav Lachman wrote:
>> DH wrote on 04/26/2016 17:47:
>>>> 5GB of RAM
>>>
>>> That seems to be an insufficient amount of system RAM when employing ZFS.
>>>
>>> Take a look at this:
>>>
>>> http://doc.freenas.org/9.3/freenas_intro.html#ram
>>
>> I know 5GB is not much these days, but is a lot of memory used for
>> scrubbing? Because I am satisfied with working performance. The only
>> concern is slow scrubbing, and I am not sure that more memory helps
>> in this case.

I don't expect memory to make a big difference in scrub or resilver
performance. Remember that ZFS uses memory as both a write buffer and a
read cache (all in the ARC). So insufficient memory will hurt real-world
performance, but it should not have any real effect on a scrub or
resilver, both of which read all the data that has been written to the
zpool and check it against the metadata for consistency.
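
If you want to see how much of that 5GB the ARC is actually using,
something like this works on FreeBSD (sysctl names are from the 10.x
era; verify them on your version):

    # Current ARC size in bytes, and the configured ceiling.
    sysctl kstat.zfs.misc.arcstats.size
    sysctl vfs.zfs.arc_max
    # Hit/miss counters give a rough idea of cache effectiveness.
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses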

Scrub (and resilver) operations are essentially all random I/O. Those
drives are low-end, low-performance desktop drives.
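
Also note that the scrub is deliberately throttled so it yields to
normal pool I/O. If the pool is mostly idle you can let it run harder.
These are the tunables as I remember them from FreeBSD 10.x; check
`sysctl -d vfs.zfs` on your system before changing anything:

    # Ticks to delay each scrub I/O when the pool has seen recent
    # non-scrub activity; 0 removes the throttle entirely.
    sysctl vfs.zfs.scrub_delay=0
    # Cap on concurrent in-flight scrub I/Os per top-level vdev.
    sysctl vfs.zfs.top_maxinflight=128
    # Minimum time (ms) per txg that the scanner gets to work.
    sysctl vfs.zfs.scan_min_time_ms=3000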

The fact that the scrub _repaired_ anything means that there was damage
to data. If all of the data on the drives is good, then a scrub has
nothing to repair.
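
A `zpool status -v` after the scrub will show which vdev took the
repairs and whether any files had unrecoverable errors (the pool name
below is just a placeholder):

    # Per-vdev read/write/checksum error counters; the scan line
    # reports how much the last scrub repaired.
    zpool status -v tank
    # Once you have investigated, reset the counters so any new
    # errors stand out on the next scrub.
    zpool clear tank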

What does `iostat -x 1` show during the scrub? How about
`zpool iostat -v 1`? How hard are you hitting those drives, and are
they all really healthy? I have seen svc_t values differ by a factor of
two among drives of the same make and model. Is one drive slower than
the rest? Perhaps that drive is on its way out.
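
Roughly what I would run while the scrub is going, and what to watch
for (pool name is a placeholder again):

    # Per-device latency, queue depth and utilization, 1s samples;
    # watch svc_t and %b for one disk that lags the others.
    iostat -x 1
    # The same scrub load from ZFS's point of view, per vdev.
    zpool iostat -v tank 1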



