Date:      Tue, 26 Apr 2016 17:08:12 +0200
From:      Miroslav Lachman <000.fbsd@quip.cz>
To:        Jeremy Faulkner <gldisater@gmail.com>, freebsd-fs@freebsd.org
Subject:   Re: How to speed up slow zpool scrub?
Message-ID:  <571F845C.5060902@quip.cz>
In-Reply-To: <571F82B5.3010807@gmail.com>
References:  <571F62AD.6080005@quip.cz> <571F687D.8040103@internetx.com> <571F6EA4.90800@quip.cz> <571F82B5.3010807@gmail.com>

Jeremy Faulkner wrote on 04/26/2016 17:01:
> zfs get all tank0

I set checksum=fletcher4 and compression=lz4 (and turned atime and exec 
off); everything else is at its default.
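For reference, those local settings correspond to commands like the following (a sketch; properties set on the pool's root dataset are inherited by the child filesystems unless they override them locally):

```shell
# Set the non-default properties mentioned above on the pool's root dataset.
zfs set checksum=fletcher4 tank0
zfs set compression=lz4 tank0
zfs set atime=off tank0
zfs set exec=off tank0
```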

There are 19 filesystems on tank0 and each has about 5 snapshots.

I don't know how long a scrub takes on other systems, or whether it is 
limited by CPU or by disk IOPS, but 3-4 days seems really long to me.
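The 3-4 day figure can be sanity-checked from the numbers in the zpool status output quoted below. A minimal sketch of the arithmetic behind the "to go" estimate (assuming T means TiB and M/s means MiB/s, as zpool(8) reports; the small difference from zpool's own 23h32m comes from its internally smoothed rate):

```shell
# Reproduce zpool's time-to-go estimate from the scrub progress line:
#   "7.63T scanned out of 10.6T at 36.7M/s, 23h32m to go"
awk -v scanned=7.63 -v total=10.6 -v rate=36.7 'BEGIN {
    # remaining TiB -> MiB -> seconds at the current scan rate
    secs = (total - scanned) * 1024 * 1024 / rate
    printf "%dh%dm to go\n", secs / 3600, (secs % 3600) / 60
}'
# prints: 23h34m to go
```

At 36.7 MiB/s a full pass over 10.6 TiB would take roughly 3.5 days, so the observed duration is consistent with the scan rate; the question is why the rate is that low.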


# zfs get all tank0
NAME   PROPERTY              VALUE                  SOURCE
tank0  type                  filesystem             -
tank0  creation              Thu Jul 23  1:37 2015  -
tank0  used                  7.85T                  -
tank0  available             2.26T                  -
tank0  referenced            140K                   -
tank0  compressratio         1.86x                  -
tank0  mounted               no                     -
tank0  quota                 none                   default
tank0  reservation           none                   default
tank0  recordsize            128K                   default
tank0  mountpoint            none                   local
tank0  sharenfs              off                    default
tank0  checksum              fletcher4              local
tank0  compression           lz4                    local
tank0  atime                 off                    local
tank0  devices               on                     default
tank0  exec                  off                    local
tank0  setuid                on                     default
tank0  readonly              off                    default
tank0  jailed                off                    default
tank0  snapdir               hidden                 default
tank0  aclmode               discard                default
tank0  aclinherit            restricted             default
tank0  canmount              on                     default
tank0  xattr                 on                     default
tank0  copies                1                      default
tank0  version               5                      -
tank0  utf8only              off                    -
tank0  normalization         none                   -
tank0  casesensitivity       sensitive              -
tank0  vscan                 off                    default
tank0  nbmand                off                    default
tank0  sharesmb              off                    default
tank0  refquota              none                   default
tank0  refreservation        none                   default
tank0  primarycache          all                    default
tank0  secondarycache        all                    default
tank0  usedbysnapshots       0                      -
tank0  usedbydataset         140K                   -
tank0  usedbychildren        7.85T                  -
tank0  usedbyrefreservation  0                      -
tank0  logbias               latency                default
tank0  dedup                 off                    default
tank0  mlslabel                                     -
tank0  sync                  standard               default
tank0  refcompressratio      1.00x                  -
tank0  written               140K                   -
tank0  logicalused           13.3T                  -
tank0  logicalreferenced     9.50K                  -
tank0  volmode               default                default
tank0  filesystem_limit      none                   default
tank0  snapshot_limit        none                   default
tank0  filesystem_count      none                   default
tank0  snapshot_count        none                   default
tank0  redundant_metadata    all                    default


> On 2016-04-26 9:35 AM, Miroslav Lachman wrote:
>> InterNetX - Juergen Gotteswinter wrote on 04/26/2016 15:09:
>>> to speed up the scrub itself you can try
>>>
>>> sysctl vfs.zfs.scrub_delay=0  (the default is 4; 0 gives the scrub higher prio)
>>
>> I will try it in the idle times
>>
>>> but be careful as this can cause a serious performance impact, the value
>>> can be changed on the fly
>>>
>>> your pool is raidz, mirror ? dedup is hopefully disabled?
>>
>> I forgot to mention it. Disks are partitioned to four partitions:
>>
>> # gpart show -l ada0
>> =>        34  7814037101  ada0  GPT  (3.6T)
>>            34           6        - free -  (3.0K)
>>            40        1024     1  boot0  (512K)
>>          1064    10485760     2  swap0  (5.0G)
>>      10486824    31457280     3  disk0sys  (15G)
>>      41944104  7769948160     4  disk0tank0  (3.6T)
>>    7811892264     2144871        - free -  (1.0G)
>>
>> diskXsys partitions are used for the base system pool, which is a 4-way mirror
>>
>> diskXtank0 partitions are used for data storage as RAIDZ
>>
>> # zpool list
>> NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH
>> ALTROOT
>> sys    14.9G  11.0G  3.92G         -    79%    73%  1.00x  ONLINE  -
>> tank0  14.4T  10.8T  3.56T         -    19%    75%  1.00x  ONLINE  -
>>
>>
>> # zpool status -v
>>    pool: sys
>>   state: ONLINE
>>    scan: scrub repaired 0 in 1h2m with 0 errors on Sun Apr 24 04:03:54
>> 2016
>> config:
>>
>>          NAME              STATE     READ WRITE CKSUM
>>          sys               ONLINE       0     0     0
>>            mirror-0        ONLINE       0     0     0
>>              gpt/disk0sys  ONLINE       0     0     0
>>              gpt/disk1sys  ONLINE       0     0     0
>>              gpt/disk2sys  ONLINE       0     0     0
>>              gpt/disk3sys  ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>>    pool: tank0
>>   state: ONLINE
>>    scan: scrub in progress since Sun Apr 24 03:01:35 2016
>>          7.63T scanned out of 10.6T at 36.7M/s, 23h32m to go
>>          0 repaired, 71.98% done
>> config:
>>
>>          NAME                STATE     READ WRITE CKSUM
>>          tank0               ONLINE       0     0     0
>>            raidz1-0          ONLINE       0     0     0
>>              gpt/disk0tank0  ONLINE       0     0     0
>>              gpt/disk1tank0  ONLINE       0     0     0
>>              gpt/disk2tank0  ONLINE       0     0     0
>>              gpt/disk3tank0  ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>>
>> # zdb | grep ashift
>>              ashift: 12
>>              ashift: 12
>>
>>
>> Thank you for the information.
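For the record, the scrub_delay tuning suggested earlier in the thread would look like this on FreeBSD 10.x (a sketch; it needs root, and the new value takes effect immediately, no reboot required):

```shell
# Temporarily remove the per-I/O scrub delay so the scrub runs at full
# speed, then restore the default once the busy hours start again.
sysctl vfs.zfs.scrub_delay=0   # 0 ticks: fastest scrub, biggest impact on other I/O
# ... let the scrub make progress during idle hours ...
sysctl vfs.zfs.scrub_delay=4   # 4 ticks is the default
```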



