Date:      Wed, 27 Apr 2016 19:36:16 +0200
From:      Adam Nowacki <nowakpl@platinum.linux.pl>
To:        freebsd-fs@freebsd.org
Subject:   Re: How to speed up slow zpool scrub?
Message-ID:  <5720F890.3040600@platinum.linux.pl>
In-Reply-To: <5720AAF8.4090900@quip.cz>
References:  <698816653.2698619.1461685653634.JavaMail.yahoo.ref@mail.yahoo.com> <698816653.2698619.1461685653634.JavaMail.yahoo@mail.yahoo.com> <571F9897.2070008@quip.cz> <571FEB34.7040305@andyit.com.au> <56C0A956-F134-4A8D-A8B6-B93DCA045BE4@pk1048.com> <084201d1a03e$d2158fe0$7640afa0$@andyit.com.au> <5720AAF8.4090900@quip.cz>

On 2016-04-27 14:05, Miroslav Lachman wrote:
> Andy Farkas wrote on 04/27/2016 06:39:
>>> -----Original Message-----
>>> From: PK1048 [mailto:paul@pk1048.com]
>>> Sent: Wednesday, 27 April 2016 12:34 PM
>>> To: Andy Farkas <andyf@andyit.com.au>
>>> Cc: freebsd-fs@freebsd.org
>>> Subject: Re: How to speed up slow zpool scrub?
>>>
>>> ...
>>> Scrub (and resilver) operations are essentially all random I/O. Those
>>> drives are low-end, low-performance desktop drives.
>>
>> Yes, the system is an old low-end, low-performance desktop. That was
>> my point: it took 25 hours to scrub 7.52T, not 4 days as the OP is
>> reporting.
> 
> Thank you for the output of your zpool scrub. It is definitely faster
> than mine.
> 
> To: Paul pk1048
> My scrub does not repair anything. The drives are OK according to SMART.
> The CPU is about 70%-90% idle during the scrub + rsync backup, and the
> drives are about 60%-70% busy according to iostat:
> 
> root@kiwi ~/# iostat -x -w 10 ada0 ada1 ada2 ada3
>                         extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0      70.1  17.9  1747.0   802.4    0   7.0  24
> ada1      70.1  17.9  1747.1   802.4    0   7.0  25
> ada2      66.9  17.1  1686.4   791.3    4   6.5  23
> ada3      66.9  16.9  1686.3   790.1    2   6.6  23
>                         extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0      93.6  13.0   576.4   244.0    0  21.6  70
> ada1      98.6  12.9   587.2   246.4    2  20.5  71
> ada2      87.9  15.3   566.0   242.4    3  20.5  67
> ada3      84.9  14.5   549.2   237.2    3  20.4  66
>                         extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0      98.7  42.5  1924.7  2536.3    1  26.3  86
> ada1      99.1  45.5  1931.5  2671.5    1  23.8  87
> ada2      94.2  44.9  1840.7  2720.3    0  20.1  76
> ada3      93.6  42.7  1807.9  2607.1    0  18.7  75
>                         extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0     108.2  28.2  1092.6  1316.6    2  17.3  68
> ada1     101.6  26.3  1053.8  1183.4    3  15.5  67
> ada2      98.6  26.0  1000.2  1126.2    2  12.2  57
> ada3     104.0  24.0  1015.8  1080.6    3  14.1  60
>                         extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0     116.0  18.5   821.8   807.8    0  12.9  62
> ada1     117.2  18.5   822.2   807.0    0  13.5  63
> ada2     110.8  20.9   743.0   803.8    0  11.1  58
> ada3     108.2  20.0   688.2   755.0    2  11.3  55
>                         extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0     121.8  16.6   602.1   526.9    3   9.2  52
> ada1     122.2  16.5   606.9   528.5    4   9.8  54
> ada2     117.0  14.6   601.7   524.9    2  11.3  60
> ada3     120.6  13.5   610.1   491.3    0  11.4  61
> 
> I really don't know why it cannot go faster when nothing is loaded at 100%.

1) zpool scrub is single-threaded, with prefetch,
2) some data blocks do not span all disks (metadata, small files,
compression).
The end result is that ZFS can't always read from all disks during a
scrub, so disk utilization will stay below 100% even when the scrub is
running at full speed.
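
If the goal is to trade some foreground I/O for a faster scrub, the usual
knobs on the FreeBSD 10.x-era ZFS are the vfs.zfs.* scan sysctls. A minimal
sketch follows; the tunable names and values here are assumptions based on
that era's code, so verify them with "sysctl -d" on your release, and "tank"
is a placeholder pool name:

# Inspect the current scrub-related tunables:
sysctl vfs.zfs.scrub_delay vfs.zfs.scan_idle vfs.zfs.top_maxinflight

# Make the scrub more aggressive, at the cost of competing I/O:
sysctl vfs.zfs.scrub_delay=0        # ticks inserted between scrub I/Os
sysctl vfs.zfs.top_maxinflight=128  # max scrub I/Os in flight per top-level vdev

# Watch progress and per-vdev load while it runs:
zpool status -v tank
zpool iostat -v tank 10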



