Date:      Thu, 4 Aug 2016 23:50:20 +0200
From:      Rainer Duffner <rainer@ultra-secure.de>
To:        Fabian Keil <freebsd-listen@fabiankeil.de>, FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject:   Re: zfs receive stalls whole system
Message-ID:  <DEF9BB57-BAF1-42F5-8927-F09AEB7E4740@ultra-secure.de>
In-Reply-To: <20160526124822.374b2dea@fabiankeil.de>
References:  <0C2233A9-C64A-4773-ABA5-C0BCA0D037F0@ultra-secure.de> <20160517102757.135c1468@fabiankeil.de> <c090ab7bbff2fffe2a49284f9be70183@ultra-secure.de> <20160517123627.699e2aa5@fabiankeil.de> <20160526124822.374b2dea@fabiankeil.de>


> On 26.05.2016 at 12:48, Fabian Keil <freebsd-listen@fabiankeil.de> wrote:
> 
>> 
>> It can cause deadlocks and poor performance when paging.
>> 
>> This was recently fixed in ElectroBSD and I intend to submit
>> the patch in a couple of days after a bit more stress testing.
> 
> Done: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=209759

I've now been able to apply this patch.
It doesn't seem to do any harm, but it doesn't fix the problem either.


(sorry for the long post)

pool: datapool
state: ONLINE
scan: none requested
config:

NAME                      STATE     READ WRITE CKSUM
datapool                  ONLINE       0     0     0
  raidz2-0                ONLINE       0     0     0
    gpt/S0M1ESLL_C1S03    ONLINE       0     0     0
    gpt/S0M1F8V0_C1S04    ONLINE       0     0     0
    gpt/S0M1EQPR_C1S05    ONLINE       0     0     0
    gpt/S0M19J9D_C1S06    ONLINE       0     0     0
    gpt/S0M1ES7R_C1S07    ONLINE       0     0     0
    gpt/S0M1DXJR_C1S08    ONLINE       0     0     0
  raidz2-1                ONLINE       0     0     0
    gpt/S0M1EQHL_C2S01    ONLINE       0     0     0
    gpt/S0M1EQSL_C2S02    ONLINE       0     0     0
    gpt/S0M1F7CG_C2S03    ONLINE       0     0     0
    gpt/S0M1F2B1_C2S04    ONLINE       0     0     0
    gpt/S0M1ER7Y_C2S05    ONLINE       0     0     0
    gpt/S0M1F9B0_C2S06    ONLINE       0     0     0
  raidz2-2                ONLINE       0     0     0
    gpt/S3L29R3L_EC1_S01  ONLINE       0     0     0
    gpt/S3L29XFQ_EC1_S02  ONLINE       0     0     0
    gpt/S3L29QTK_EC1_S03  ONLINE       0     0     0
    gpt/S3L28ZFA_EC1_S04  ONLINE       0     0     0
    gpt/S3L29PG9_EC1_S05  ONLINE       0     0     0
    gpt/S3L29TAA_EC1_S06  ONLINE       0     0     0
  raidz2-3                ONLINE       0     0     0
    gpt/S3L29RHR_EC1_S07  ONLINE       0     0     0
    gpt/S3L29VQT_EC1_S08  ONLINE       0     0     0
    gpt/S3L2A7WM_EC1_S09  ONLINE       0     0     0
    gpt/S3L29GXN_EC1_S10  ONLINE       0     0     0
    gpt/S3L29TPT_EC1_S11  ONLINE       0     0     0
    gpt/S3L2A4EJ_EC1_S12  ONLINE       0     0     0

errors: No known data errors

pool: zroot
state: ONLINE
scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
zroot       ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    da8p3   ONLINE       0     0     0
    da9p3   ONLINE       0     0     0

errors: No known data errors

Machine 1 creates hourly, daily and weekly snapshots (with zfSnap) and sends them hourly to Machine 2:

/usr/local/sbin/zxfer -dF -o sharenfs="-maproot=1003 -network 10.10.91.224 -mask 255.255.255.240" -T root@10.168.91.231 -R datapool/nfs datapool/backup
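
For context, the schedule can be read off the snapshot names and the sync log further down: zfSnap runs a few minutes past the hour, zxfer at :17. A crontab on Machine 1 along these lines would produce that pattern (illustrative sketch only, not the actual crontab; TTLs guessed from the --12h/--7d/--2w suffixes):

  # hypothetical /etc/crontab entries on Machine 1 (sender)
  4  *  *  *  *  root  /usr/local/sbin/zfSnap -a 12h -r datapool/nfs
  17 *  *  *  *  root  /usr/local/sbin/zxfer -dF -o sharenfs=... -T root@10.168.91.231 -R datapool/nfs datapool/backup
  # similar zfSnap entries with -a 7d and -a 2w cover the daily/weekly snapshots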

The network is Gbit.

The filesystems aren't that big (IMO):

NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
datapool  19.5T  7.08T  12.4T         -    13%    36%  1.00x  ONLINE  -
zroot      556G  7.78G   548G         -     1%     1%  1.00x  ONLINE  -

Nor are the snapshots really that large:

NAME                                                                USED  AVAIL  REFER  MOUNTPOINT
datapool/nfs/bla1-archives@weekly-2016-07-23_04.44.27--2w          43.3M      -  49.5G  -
datapool/nfs/bla1-archives@daily-2016-07-28_03.35.25--7d           42.0M      -  49.9G  -
datapool/nfs/bla1-archives@daily-2016-07-29_03.33.40--7d           42.0M      -  49.9G  -
datapool/nfs/bla1-archives@daily-2016-07-30_03.22.18--7d               0      -  49.9G  -
datapool/nfs/bla1-archives@weekly-2016-07-30_04.15.01--2w              0      -  49.9G  -
datapool/nfs/bla1-archives@daily-2016-07-31_03.14.47--7d           42.0M      -  49.9G  -
datapool/nfs/bla1-archives@daily-2016-08-01_05.03.36--7d           42.0M      -  49.9G  -
datapool/nfs/bla1-archives@daily-2016-08-02_05.02.39--7d           42.0M      -  49.9G  -
datapool/nfs/bla1-archives@daily-2016-08-03_03.57.46--7d           42.2M      -  49.9G  -
datapool/nfs/bla1-archives@hourly-2016-08-03_12.04.00--12h             0      -  19.8G  -
datapool/nfs/bla1-archives@hourly-2016-08-03_13.04.00--12h             0      -  19.8G  -
datapool/nfs/bla1-archives@hourly-2016-08-03_14.04.00--12h          192K      -   575K  -
datapool/nfs/bla1-archives@hourly-2016-08-03_15.04.00--12h             0      -  12.1M  -
datapool/nfs/bla1-archives@hourly-2016-08-03_16.04.00--12h             0      -  12.1M  -
datapool/nfs/bla1-archives@hourly-2016-08-03_17.04.00--12h             0      -  12.1M  -
datapool/nfs/bla1-archives@hourly-2016-08-03_18.04.00--12h             0      -  12.1M  -
datapool/nfs/bla1-archives@hourly-2016-08-03_19.04.00--12h             0      -  12.1M  -
datapool/nfs/bla1-archives@hourly-2016-08-03_20.04.00--12h             0      -  12.1M  -
datapool/nfs/bla1-archives@hourly-2016-08-03_21.04.00--12h             0      -  12.1M  -
datapool/nfs/bla1-archives@hourly-2016-08-03_22.04.00--12h             0      -  12.1M  -
datapool/nfs/bla1-archives@hourly-2016-08-03_23.04.00--12h             0      -  12.1M  -
datapool/nfs/bla1-archives@hourly-2016-08-04_00.04.00--12h             0      -  12.1M  -
datapool/nfs/bla1-documents@weekly-2016-07-23_04.44.27--2w         6.02G      -  4.51T  -
datapool/nfs/bla1-documents@daily-2016-07-28_03.35.25--7d          5.85G      -  4.54T  -
datapool/nfs/bla1-documents@daily-2016-07-29_03.33.40--7d          5.82G      -  4.55T  -
datapool/nfs/bla1-documents@daily-2016-07-30_03.22.18--7d              0      -  4.56T  -
datapool/nfs/bla1-documents@weekly-2016-07-30_04.15.01--2w             0      -  4.56T  -
datapool/nfs/bla1-documents@daily-2016-07-31_03.14.47--7d          5.80G      -  4.56T  -
datapool/nfs/bla1-documents@daily-2016-08-01_05.03.36--7d          5.80G      -  4.56T  -
datapool/nfs/bla1-documents@daily-2016-08-02_05.02.39--7d          5.81G      -  4.56T  -
datapool/nfs/bla1-documents@daily-2016-08-03_03.57.46--7d          70.6M      -  4.56T  -
datapool/nfs/bla1-documents@hourly-2016-08-03_12.04.00--12h        6.85M      -  4.57T  -
datapool/nfs/bla1-documents@hourly-2016-08-03_13.04.00--12h        3.42M      -  4.57T  -
datapool/nfs/bla1-documents@hourly-2016-08-03_14.04.00--12h        9.88M      -  4.57T  -
datapool/nfs/bla1-documents@hourly-2016-08-03_15.04.00--12h        12.6M      -  4.57T  -
datapool/nfs/bla1-documents@hourly-2016-08-03_16.04.00--12h        12.4M      -  4.57T  -
datapool/nfs/bla1-documents@hourly-2016-08-03_17.04.00--12h        11.5M      -  4.58T  -
datapool/nfs/bla1-documents@hourly-2016-08-03_18.04.00--12h        4.64M      -  4.58T  -
datapool/nfs/bla1-documents@hourly-2016-08-03_19.04.00--12h         464K      -  4.58T  -
datapool/nfs/bla1-documents@hourly-2016-08-03_20.04.00--12h         352K      -  4.58T  -
datapool/nfs/bla1-documents@hourly-2016-08-03_21.04.00--12h         384K      -  4.58T  -
datapool/nfs/bla1-documents@hourly-2016-08-03_22.04.00--12h        79.9K      -  4.58T  -
datapool/nfs/bla1-documents@hourly-2016-08-03_23.04.00--12h            0      -  4.58T  -
datapool/nfs/bla1-documents@hourly-2016-08-04_00.04.00--12h            0      -  4.58T  -
datapool/nfs/bla1-project_layouts@weekly-2016-07-23_04.44.27--2w    176K      -  1.85M  -
datapool/nfs/bla1-project_layouts@daily-2016-07-28_03.35.25--7d     144K      -  1.85M  -
datapool/nfs/bla1-project_layouts@daily-2016-07-29_03.33.40--7d     144K      -  1.85M  -
datapool/nfs/bla1-project_layouts@daily-2016-07-30_03.22.18--7d        0      -  1.85M  -
datapool/nfs/bla1-project_layouts@weekly-2016-07-30_04.15.01--2w       0      -  1.85M  -
datapool/nfs/bla1-project_layouts@daily-2016-07-31_03.14.47--7d     128K      -  1.85M  -
datapool/nfs/bla1-project_layouts@daily-2016-08-01_05.03.36--7d     128K      -  1.85M  -
datapool/nfs/bla1-project_layouts@daily-2016-08-02_05.02.39--7d     176K      -  1.85M  -
datapool/nfs/bla1-project_layouts@daily-2016-08-03_03.57.46--7d     176K      -  1.85M  -
datapool/nfs/bla1-project_layouts@hourly-2016-08-03_12.04.00--12h   144K      -  1.85M  -
datapool/nfs/bla1-project_layouts@hourly-2016-08-03_13.04.00--12h   112K      -  1.85M  -
datapool/nfs/bla1-project_layouts@hourly-2016-08-03_14.04.00--12h      0      -  1.85M  -
datapool/nfs/bla1-project_layouts@hourly-2016-08-03_15.04.00--12h      0      -  1.85M  -
datapool/nfs/bla1-project_layouts@hourly-2016-08-03_16.04.00--12h      0      -  1.85M  -
datapool/nfs/bla1-project_layouts@hourly-2016-08-03_17.04.00--12h      0      -  1.85M  -
datapool/nfs/bla1-project_layouts@hourly-2016-08-03_18.04.00--12h      0      -  1.85M  -
datapool/nfs/bla1-project_layouts@hourly-2016-08-03_19.04.00--12h      0      -  1.85M  -
datapool/nfs/bla1-project_layouts@hourly-2016-08-03_20.04.00--12h      0      -  1.85M  -
datapool/nfs/bla1-project_layouts@hourly-2016-08-03_21.04.00--12h      0      -  1.85M  -
datapool/nfs/bla1-project_layouts@hourly-2016-08-03_22.04.00--12h      0      -  1.85M  -
datapool/nfs/bla1-project_layouts@hourly-2016-08-03_23.04.00--12h      0      -  1.85M  -
datapool/nfs/bla1-project_layouts@hourly-2016-08-04_00.04.00--12h      0      -  1.85M  -
datapool/nfs/bla1-wkhtml@weekly-2016-07-23_04.44.27--2w             128K      -   208K  -
datapool/nfs/bla1-wkhtml@daily-2016-07-28_03.35.25--7d              128K      -   208K  -
datapool/nfs/bla1-wkhtml@daily-2016-07-29_03.33.40--7d              128K      -   208K  -
datapool/nfs/bla1-wkhtml@daily-2016-07-30_03.22.18--7d                 0      -   208K  -
datapool/nfs/bla1-wkhtml@weekly-2016-07-30_04.15.01--2w                0      -   208K  -
datapool/nfs/bla1-wkhtml@daily-2016-07-31_03.14.47--7d              128K      -   208K  -
datapool/nfs/bla1-wkhtml@daily-2016-08-01_05.03.36--7d              128K      -   208K  -
datapool/nfs/bla1-wkhtml@daily-2016-08-02_05.02.39--7d              128K      -   208K  -
datapool/nfs/bla1-wkhtml@daily-2016-08-03_03.57.46--7d                 0      -   208K  -
datapool/nfs/bla1-wkhtml@hourly-2016-08-03_12.04.00--12h               0      -   208K  -
datapool/nfs/bla1-wkhtml@hourly-2016-08-03_13.04.00--12h               0      -   208K  -
datapool/nfs/bla1-wkhtml@hourly-2016-08-03_14.04.00--12h               0      -   208K  -
datapool/nfs/bla1-wkhtml@hourly-2016-08-03_15.04.00--12h               0      -   208K  -
datapool/nfs/bla1-wkhtml@hourly-2016-08-03_16.04.00--12h               0      -   208K  -
datapool/nfs/bla1-wkhtml@hourly-2016-08-03_17.04.00--12h               0      -   208K  -
datapool/nfs/bla1-wkhtml@hourly-2016-08-03_18.04.00--12h               0      -   208K  -
datapool/nfs/bla1-wkhtml@hourly-2016-08-03_19.04.00--12h               0      -   208K  -
datapool/nfs/bla1-wkhtml@hourly-2016-08-03_20.04.00--12h               0      -   208K  -
datapool/nfs/bla1-wkhtml@hourly-2016-08-03_21.04.00--12h               0      -   208K  -
datapool/nfs/bla1-wkhtml@hourly-2016-08-03_22.04.00--12h               0      -   208K  -
datapool/nfs/bla1-wkhtml@hourly-2016-08-03_23.04.00--12h               0      -   208K  -
datapool/nfs/bla1-wkhtml@hourly-2016-08-04_00.04.00--12h               0      -   208K  -
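
One way to double-check that the streams themselves are small is a dry-run of the incremental send on the sender (snapshot names taken from the list above; -n estimates the stream size without sending anything):

  zfs send -nv -i datapool/nfs/bla1-documents@hourly-2016-08-03_23.04.00--12h \
      datapool/nfs/bla1-documents@hourly-2016-08-04_00.04.00--12h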



I even went as far as "cross-flashing" LSI's 20.00.xy firmware to the HP cards, which came with very old firmware (which really only works with the older MS-DOS versions of the software).
HP refuses to provide updated firmware.

However, zfs receive stalls the system even if there is virtually no data to be transferred.
The stalls take longer at 03:00 and 04:00, which is when (I assume) the filesystems that were deleted on the master are also deleted on this zfs receive target.
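
To check that assumption, something along these lines on this box (the receiver) should show whether the slow runs coincide with destroys, and whether the pool is doing any I/O during a stall:

  # recent pool-level events, including snapshot/filesystem destroys
  zpool history -il datapool | egrep 'destroy|receive' | tail -40
  # per-vdev activity, watched live while it hangs
  zpool iostat -v datapool 5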


...
-----
Wed Aug 3 01:17:00 CEST 2016
Starting sync...
Wed Aug 3 01:17:17 CEST 2016
-----
Wed Aug 3 02:17:00 CEST 2016
Starting sync...
Wed Aug 3 02:17:17 CEST 2016
-----
Wed Aug 3 03:17:00 CEST 2016
Starting sync...
Wed Aug 3 03:23:16 CEST 2016
-----
Wed Aug 3 04:17:00 CEST 2016
Starting sync...
Wed Aug 3 04:20:12 CEST 2016
-----
Wed Aug 3 05:17:00 CEST 2016
Starting sync...
Wed Aug 3 05:17:22 CEST 2016

…
Thu Aug 4 01:17:00 CEST 2016
Starting sync...
Thu Aug 4 01:17:24 CEST 2016
-----
Thu Aug 4 02:17:00 CEST 2016
Starting sync...
Thu Aug 4 02:17:20 CEST 2016
-----
Thu Aug 4 03:17:00 CEST 2016
Starting sync...
Thu Aug 4 03:23:14 CEST 2016
-----
Thu Aug 4 04:17:00 CEST 2016
Starting sync...
Thu Aug 4 04:19:53 CEST 2016
-----
Thu Aug 4 05:17:00 CEST 2016
Starting sync...
Thu Aug 4 05:17:29 CEST 2016
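
The timestamps presumably come from a small wrapper around the zxfer call; a minimal sketch that would produce this log format (not the actual script) is:

  #!/bin/sh
  # log start and end time around the sync so stall duration shows up in the log
  echo "-----"
  date
  echo "Starting sync..."
  /usr/local/sbin/zxfer -dF -o sharenfs=... -T root@10.168.91.231 -R datapool/nfs datapool/backup
  date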




I had this problem with 9.x and the old HP PA4x0 controller (and maybe with 10.0), but it went away with 10.1.
I switched controllers when I had to attach an external disk shelf to the servers because the customer needed more space. Also, it's a real PITA exchanging broken disks when you have no HPACUCLI for FreeBSD...

The first 12 disks in the pool are 600 GB SAS disks; the other 12 are 900 GB SAS in an external HP enclosure.
I have no L2ARC and no separate log device.

The system is really completely frozen during the stalls. Besides being a warm-standby device, this server also acts as a read-only MySQL slave that the application uses.
When it hangs, the whole application hangs and NetBackup stops backing up.
The zfs sender has no problems.

The vdevs/pools were created with vfs.zfs.min_auto_ashift=12.
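
Both the sysctl and the ashift the vdevs actually ended up with can be verified like this (the -U variant is only needed if zdb doesn't find the cachefile on its own):

  sysctl vfs.zfs.min_auto_ashift
  zdb -C datapool | grep ashift
  # or: zdb -U /boot/zfs/zpool.cache -C datapool | grep ashift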


What else is there to look for?
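
One thing I can try next time it hangs (assuming a console still responds, or the commands are looped beforehand) is to capture where things are stuck:

  # kernel stacks of all threads, to see what the receive/txg threads are waiting on
  procstat -kk -a > /var/tmp/stacks.`date +%s`
  # system threads and their wait channels
  top -SH
  # is the pool doing any I/O at all during the stall?
  zpool iostat -v datapool 1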


------------------------------------------------------------------------
ZFS Subsystem Report				Thu Aug  4 00:09:56 2016
------------------------------------------------------------------------

System Information:

	Kernel Version:				1003000 (osreldate)
	Hardware Platform:			amd64
	Processor Architecture:			amd64

	ZFS Storage pool Version:		5000
	ZFS Filesystem Version:			5

FreeBSD 10.3-RELEASE #0 r297264: Fri Mar 25 02:10:02 UTC 2016 root
12:09AM  up 1 day,  6:59, 1 user, load averages: 0.01, 0.07, 0.07

------------------------------------------------------------------------

System Memory:

	0.32%	615.41	MiB Active,	11.44%	21.39	GiB Inact
	50.72%	94.86	GiB Wired,	0.00%	252.00	KiB Cache
	37.52%	70.18	GiB Free,	0.00%	64.00	KiB Gap

	Real Installed:				192.00	GiB
	Real Available:			99.97%	191.94	GiB
	Real Managed:			97.44%	187.03	GiB

	Logical Total:				192.00	GiB
	Logical Used:			52.31%	100.43	GiB
	Logical Free:			47.69%	91.57	GiB

Kernel Memory:					1.45	GiB
	Data:				98.17%	1.43	GiB
	Text:				1.83%	27.14	MiB

Kernel Memory Map:				187.03	GiB
	Size:				32.73%	61.22	GiB
	Free:				67.27%	125.81	GiB

------------------------------------------------------------------------

ARC Summary: (HEALTHY)
	Memory Throttle Count:			0

ARC Misc:
	Deleted:				15
	Recycle Misses:				0
	Mutex Misses:				0
	Evict Skips:				436

ARC Size:				30.04%	55.89	GiB
	Target Size: (Adaptive)		100.00%	186.03	GiB
	Min Size (Hard Limit):		12.50%	23.25	GiB
	Max Size (High Water):		8:1	186.03	GiB

ARC Size Breakdown:
	Recently Used Cache Size:	50.00%	93.01	GiB
	Frequently Used Cache Size:	50.00%	93.01	GiB

ARC Hash Breakdown:
	Elements Max:				1.65m
	Elements Current:		99.94%	1.65m
	Collisions:				358.52k
	Chain Max:				3
	Chains:					37.77k

------------------------------------------------------------------------

ARC Efficiency:					87.79m
	Cache Hit Ratio:		64.95%	57.01m
	Cache Miss Ratio:		35.05%	30.77m
	Actual Hit Ratio:		60.48%	53.09m

	Data Demand Efficiency:		96.42%	21.65m
	Data Prefetch Efficiency:	58.89%	4.98m

	CACHE HITS BY CACHE LIST:
	  Anonymously Used:		6.88%	3.92m
	  Most Recently Used:		30.90%	17.62m
	  Most Frequently Used:		62.22%	35.48m
	  Most Recently Used Ghost:	0.00%	0
	  Most Frequently Used Ghost:	0.00%	0

	CACHE HITS BY DATA TYPE:
	  Demand Data:			36.61%	20.87m
	  Prefetch Data:		5.14%	2.93m
	  Demand Metadata:		56.47%	32.20m
	  Prefetch Metadata:		1.78%	1.02m

	CACHE MISSES BY DATA TYPE:
	  Demand Data:			2.52%	775.75k
	  Prefetch Data:		6.65%	2.05m
	  Demand Metadata:		89.40%	27.51m
	  Prefetch Metadata:		1.42%	438.12k

------------------------------------------------------------------------

L2ARC is disabled

------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)

DMU Efficiency:					180.34m
	Hit Ratio:			0.31%	554.89k
	Miss Ratio:			99.69%	179.78m

	Colinear:				0
	  Hit Ratio:			100.00%	0
	  Miss Ratio:			100.00%	0

	Stride:					0
	  Hit Ratio:			100.00%	0
	  Miss Ratio:			100.00%	0

DMU Misc:
	Reclaim:				0
	  Successes:			100.00%	0
	  Failures:			100.00%	0

	Streams:				0
	  +Resets:			100.00%	0
	  -Resets:			100.00%	0
	  Bogus:				0

------------------------------------------------------------------------

VDEV cache is disabled

------------------------------------------------------------------------

ZFS Tunables (sysctl):
	kern.maxusers                           12620
	vm.kmem_size                            200818708480
	vm.kmem_size_scale                      1
	vm.kmem_size_min                        0
	vm.kmem_size_max                        1319413950874
	vfs.zfs.trim.max_interval               1
	vfs.zfs.trim.timeout                    30
	vfs.zfs.trim.txg_delay                  32
	vfs.zfs.trim.enabled                    1
	vfs.zfs.vol.unmap_enabled               1
	vfs.zfs.vol.mode                        1
	vfs.zfs.version.zpl                     5
	vfs.zfs.version.spa                     5000
	vfs.zfs.version.acl                     1
	vfs.zfs.version.ioctl                   5
	vfs.zfs.debug                           0
	vfs.zfs.super_owner                     0
	vfs.zfs.sync_pass_rewrite               2
	vfs.zfs.sync_pass_dont_compress         5
	vfs.zfs.sync_pass_deferred_free         2
	vfs.zfs.zio.exclude_metadata            0
	vfs.zfs.zio.use_uma                     1
	vfs.zfs.cache_flush_disable             0
	vfs.zfs.zil_replay_disable              0
	vfs.zfs.min_auto_ashift                 12
	vfs.zfs.max_auto_ashift                 13
	vfs.zfs.vdev.trim_max_pending           10000
	vfs.zfs.vdev.bio_delete_disable         0
	vfs.zfs.vdev.bio_flush_disable          0
	vfs.zfs.vdev.write_gap_limit            4096
	vfs.zfs.vdev.read_gap_limit             32768
	vfs.zfs.vdev.aggregation_limit          131072
	vfs.zfs.vdev.trim_max_active            64
	vfs.zfs.vdev.trim_min_active            1
	vfs.zfs.vdev.scrub_max_active           2
	vfs.zfs.vdev.scrub_min_active           1
	vfs.zfs.vdev.async_write_max_active     10
	vfs.zfs.vdev.async_write_min_active     1
	vfs.zfs.vdev.async_read_max_active      3
	vfs.zfs.vdev.async_read_min_active      1
	vfs.zfs.vdev.sync_write_max_active      10
	vfs.zfs.vdev.sync_write_min_active      10
	vfs.zfs.vdev.sync_read_max_active       10
	vfs.zfs.vdev.sync_read_min_active       10
	vfs.zfs.vdev.max_active                 1000
	vfs.zfs.vdev.async_write_active_max_dirty_percent  60
	vfs.zfs.vdev.async_write_active_min_dirty_percent  30
	vfs.zfs.vdev.mirror.non_rotating_seek_inc  1
	vfs.zfs.vdev.mirror.non_rotating_inc    0
	vfs.zfs.vdev.mirror.rotating_seek_offset  1048576
	vfs.zfs.vdev.mirror.rotating_seek_inc   5
	vfs.zfs.vdev.mirror.rotating_inc        0
	vfs.zfs.vdev.trim_on_init               1
	vfs.zfs.vdev.cache.bshift               16
	vfs.zfs.vdev.cache.size                 0
	vfs.zfs.vdev.cache.max                  16384
	vfs.zfs.vdev.metaslabs_per_vdev         200
	vfs.zfs.txg.timeout                     5
	vfs.zfs.space_map_blksz                 4096
	vfs.zfs.spa_slop_shift                  5
	vfs.zfs.spa_asize_inflation             24
	vfs.zfs.deadman_enabled                 1
	vfs.zfs.deadman_checktime_ms            5000
	vfs.zfs.deadman_synctime_ms             1000000
	vfs.zfs.recover                         0
	vfs.zfs.spa_load_verify_data            1
	vfs.zfs.spa_load_verify_metadata        1
	vfs.zfs.spa_load_verify_maxinflight     10000
	vfs.zfs.check_hostid                    1
	vfs.zfs.mg_fragmentation_threshold      85
	vfs.zfs.mg_noalloc_threshold            0
	vfs.zfs.condense_pct                    200
	vfs.zfs.metaslab.bias_enabled           1
	vfs.zfs.metaslab.lba_weighting_enabled  1
	vfs.zfs.metaslab.fragmentation_factor_enabled  1
	vfs.zfs.metaslab.preload_enabled        1
	vfs.zfs.metaslab.preload_limit          3
	vfs.zfs.metaslab.unload_delay           8
	vfs.zfs.metaslab.load_pct               50
	vfs.zfs.metaslab.min_alloc_size         33554432
	vfs.zfs.metaslab.df_free_pct            4
	vfs.zfs.metaslab.df_alloc_threshold     131072
	vfs.zfs.metaslab.debug_unload           0
	vfs.zfs.metaslab.debug_load             0
	vfs.zfs.metaslab.fragmentation_threshold  70
	vfs.zfs.metaslab.gang_bang              16777217
	vfs.zfs.free_bpobj_enabled              1
	vfs.zfs.free_max_blocks                 -1
	vfs.zfs.no_scrub_prefetch               0
	vfs.zfs.no_scrub_io                     0
	vfs.zfs.resilver_min_time_ms            3000
	vfs.zfs.free_min_time_ms                1000
	vfs.zfs.scan_min_time_ms                1000
	vfs.zfs.scan_idle                       50
	vfs.zfs.scrub_delay                     4
	vfs.zfs.resilver_delay                  2
	vfs.zfs.top_maxinflight                 32
	vfs.zfs.zfetch.array_rd_sz              1048576
	vfs.zfs.zfetch.max_distance             8388608
	vfs.zfs.zfetch.min_sec_reap             2
	vfs.zfs.zfetch.max_streams              8
	vfs.zfs.prefetch_disable                0
	vfs.zfs.delay_scale                     500000
	vfs.zfs.delay_min_dirty_percent         60
	vfs.zfs.dirty_data_sync                 67108864
	vfs.zfs.dirty_data_max_percent          10
	vfs.zfs.dirty_data_max_max              4294967296
	vfs.zfs.dirty_data_max                  4294967296
	vfs.zfs.max_recordsize                  1048576
	vfs.zfs.mdcomp_disable                  0
	vfs.zfs.nopwrite_enabled                1
	vfs.zfs.dedup.prefetch                  1
	vfs.zfs.l2c_only_size                   0
	vfs.zfs.mfu_ghost_data_lsize            0
	vfs.zfs.mfu_ghost_metadata_lsize        0
	vfs.zfs.mfu_ghost_size                  0
	vfs.zfs.mfu_data_lsize                  40921600
	vfs.zfs.mfu_metadata_lsize              2360084992
	vfs.zfs.mfu_size                        4470225920
	vfs.zfs.mru_ghost_data_lsize            0
	vfs.zfs.mru_ghost_metadata_lsize        0
	vfs.zfs.mru_ghost_size                  0
	vfs.zfs.mru_data_lsize                  49482637824
	vfs.zfs.mru_metadata_lsize              4404856320
	vfs.zfs.mru_size                        53920903168
	vfs.zfs.anon_data_lsize                 0
	vfs.zfs.anon_metadata_lsize             0
	vfs.zfs.anon_size                       106496
	vfs.zfs.l2arc_norw                      1
	vfs.zfs.l2arc_feed_again                1
	vfs.zfs.l2arc_noprefetch                1
	vfs.zfs.l2arc_feed_min_ms               200
	vfs.zfs.l2arc_feed_secs                 1
	vfs.zfs.l2arc_headroom                  2
	vfs.zfs.l2arc_write_boost               8388608
	vfs.zfs.l2arc_write_max                 8388608
	vfs.zfs.arc_meta_limit                  49936241664
	vfs.zfs.arc_free_target                 339922
	vfs.zfs.arc_shrink_shift                7
	vfs.zfs.arc_average_blocksize           8192
	vfs.zfs.arc_min                         24968120832
	vfs.zfs.arc_max                         199744966656
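
(The list above is a plain sysctl dump; any individual knob can be inspected like this, e.g.:

  sysctl -d vfs.zfs.free_max_blocks   # description
  sysctl vfs.zfs.free_max_blocks      # current value
  # loader tunables among these go into /boot/loader.conf and need a reboot)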

------------------------------------





