Date:      Thu, 9 May 2019 13:02:35 +0200
From:      Dimitry Andric <dim@FreeBSD.org>
To:        Miroslav Lachman <000.fbsd@quip.cz>
Cc:        "Patrick M. Hausen" <hausen@punkt.de>, Michelle Sullivan <michelle@sorbs.net>, FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject:   Re: ZFS...
Message-ID:  <E980141F-48D9-4870-8FE1-9A5610F12826@FreeBSD.org>
In-Reply-To: <805ee7f1-83f6-c59e-8107-4851ca9fce6e@quip.cz>
References:  <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <70fac2fe3f23f85dd442d93ffea368e1@ultra-secure.de> <70C87D93-D1F9-458E-9723-19F9777E6F12@sorbs.net> <CAGMYy3tYqvrKgk2c==WTwrH03uTN1xQifPRNxXccMsRE1spaRA@mail.gmail.com> <5ED8BADE-7B2C-4B73-93BC-70739911C5E3@sorbs.net> <d0118f7e-7cfc-8bf1-308c-823bce088039@denninger.net> <2e4941bf-999a-7f16-f4fe-1a520f2187c0@sorbs.net> <20190430102024.E84286@mulder.mintsol.com> <41FA461B-40AE-4D34-B280-214B5C5868B5@punkt.de> <20190506080804.Y87441@mulder.mintsol.com> <08E46EBF-154F-4670-B411-482DCE6F395D@sorbs.net> <33D7EFC4-5C15-4FE0-970B-E6034EF80BEF@gromit.dlib.vt.edu> <A535026E-F9F6-4BBA-8287-87EFD02CF207@sorbs.net> <a82bfabe-a8c3-fd9a-55ec-52530d4eafff@denninger.net> <a1b78a63-0ef1-af51-4e33-a9a97a257c8b@sorbs.net> <CAMPTd_A7RYJ12pFyY4TzbXct82kWfr1hcEkSpDg7bjP25xjJGA@mail.gmail.com> <d91cf5@sorbs.net> <7D18A234-E7BF-4855-BD51-4AE2253DB1E4@sorbs.net> <E68600B3-F856-4909-AB6E-BDFCD8AAAB43@punkt.de> <805ee7f1-83f6-c59e-8107-4851ca9fce6e@quip.cz>

On 9 May 2019, at 10:32, Miroslav Lachman <000.fbsd@quip.cz> wrote:
> 
> Patrick M. Hausen wrote on 2019/05/09 09:46:
>> Hi all,
>>> Am 09.05.2019 um 00:55 schrieb Michelle Sullivan <michelle@sorbs.net>:
>>> No, one disk in the 16-disk RAIDZ2 ...  previously unseen, but the errors could have occurred in the last 6 weeks... every time I reboot it starts resilvering, gets to 761M resilvered and then stops.
>> 16 disks in *one* RAIDZ2 vdev? That might be the cause of your insanely
>> long scrubs. In general it is not recommended, though I cannot find the
>> source for that information quickly just now.
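
[For illustration: the usual rule of thumb keeps a raidz2 vdev to roughly 6-10 disks, so a 16-disk set is more often laid out as two 8-disk raidz2 vdevs. A minimal sketch, with hypothetical device names da0 through da15:

    # zpool create tank \
          raidz2 da0 da1 da2  da3  da4  da5  da6  da7 \
          raidz2 da8 da9 da10 da11 da12 da13 da14 da15

Splitting the disks this way spreads the data over two vdevs that scrub in parallel and roughly doubles the pool's random IOPS, at the cost of two extra disks' worth of capacity going to parity.]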
> 
> Extremely slow scrubs are an issue even on a 4-disk RAIDZ. I already posted about it in the past. This scrub has been running since Sunday 3 AM.
> The time-to-go estimate is a big lie: it was "19hXXm" 12 hours ago.
> 
>  pool: tank0
> state: ONLINE
>  scan: scrub in progress since Sun May  5 03:01:48 2019
>        10.8T scanned out of 12.7T at 30.4M/s, 18h39m to go
>        0 repaired, 84.72% done
> config:
> 
>        NAME                STATE     READ WRITE CKSUM
>        tank0               ONLINE       0     0     0
>          raidz1-0          ONLINE       0     0     0
>            gpt/disk0tank0  ONLINE       0     0     0
>            gpt/disk1tank0  ONLINE       0     0     0
>            gpt/disk2tank0  ONLINE       0     0     0
>            gpt/disk3tank0  ONLINE       0     0     0
> 
> Disks are OK, monitored by smartmontools. There is nothing odd, just the long, long scrubs. This machine started with 4x 1TB disks (now 4x 4TB) and scrubs were slow with the 1TB disks too. This machine (an HP ML110 G8) was my first machine with ZFS. If I remember correctly, it started on FreeBSD 7.0 and is now running 11.2. A scrub always took, and still takes, about one week. (I tried some sysctl tuning without much gain.)
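
[Miroslav does not say which sysctls he tried; on the pre-r339034 scan code in FreeBSD 11.x, the scrub-related knobs usually discussed are the ones below. The values are illustrative, not recommendations:

    # let scrub I/O proceed even when the pool is not idle
    sysctl vfs.zfs.scrub_delay=0        # ticks between scrub I/Os (default 4)
    sysctl vfs.zfs.scan_idle=5          # ms of inactivity before the pool counts as idle (default 50)
    sysctl vfs.zfs.top_maxinflight=128  # max in-flight scrub I/Os per top-level vdev (default 32)

As Miroslav found, such tuning rarely helps much: the old code scans in block-pointer order rather than disk order, so a fragmented pool remains seek-bound however the queues are tuned.]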

Unfortunately https://svnweb.freebsd.org/changeset/base/339034, which
greatly speeds up scrubs and resilvers, was not in 11.2 (the release was
cut at r334458).

If you can update to a more recent snapshot, or try the upcoming 11.3
prereleases, you should hopefully see much shorter scrub times.
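
[A hedged sketch of how to compare before and after, reusing the pool name tank0 from the status output above:

    # uname -a              # a kernel built from svn embeds its rNNNNNN revision
    # zpool scrub -s tank0  # cancel the scrub that is currently crawling
    ... upgrade to a build that includes r339034, reboot ...
    # zpool scrub tank0     # start a fresh scrub under the new scan code
    # zpool status tank0    # compare the rate against the 30.4M/s seen earlier

r339034 landed in stable/11 after the 11.2 branch point, so 11.3 and later, and recent stable/11 snapshots, should include it.]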

-Dimitry


