Date:      Wed, 30 Jan 2013 09:19:52 -0600
From:      Kevin Day <toasty@dragondata.com>
To:        Nikolay Denev <ndenev@gmail.com>
Cc:        FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject:   Re: Improving ZFS performance for large directories
Message-ID:  <47975CEB-EA50-4F6C-8C47-6F32312F34C4@dragondata.com>
In-Reply-To: <5267B97C-ED47-4AAB-8415-12D6987E9371@gmail.com>
References:  <19DB8F4A-6788-44F6-9A2C-E01DEA01BED9@dragondata.com> <5267B97C-ED47-4AAB-8415-12D6987E9371@gmail.com>


On Jan 30, 2013, at 4:36 AM, Nikolay Denev <ndenev@gmail.com> wrote:
> 
> 
> What are your vfs.zfs.arc_meta_limit and vfs.zfs.arc_meta_used sysctls?
> Maybe increasing the limit would help?


vfs.zfs.arc_meta_limit: 8199079936
vfs.zfs.arc_meta_used: 13965744408
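
So arc_meta_used is already well above arc_meta_limit here. As a point of reference, a minimal sketch of raising the limit to 16 GiB, assuming the value is given in bytes and that on this release it is a boot-time tunable set in /boot/loader.conf rather than a runtime-writable sysctl:

	# /boot/loader.conf  (takes effect at next boot)
	vfs.zfs.arc_meta_limit="17179869184"	# 16 GiB, above the current arc_meta_used

	# On releases where this sysctl is writable at runtime, the equivalent would be:
	# sysctl vfs.zfs.arc_meta_limit=17179869184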

Full output of zfs-stats:


------------------------------------------------------------------------
ZFS Subsystem Report				Wed Jan 30 15:16:54 2013
------------------------------------------------------------------------

System Information:

	Kernel Version:				901000 (osreldate)
	Hardware Platform:			amd64
	Processor Architecture:			amd64

	ZFS Storage pool Version:		28
	ZFS Filesystem Version:			5

FreeBSD 9.1-RC2 #1: Tue Oct 30 20:37:38 UTC 2012 root
 3:16PM  up 19 days, 19:44, 2 users, load averages: 0.91, 0.80, 0.68

------------------------------------------------------------------------

System Memory:

	12.44%	7.72	GiB Active,	6.04%	3.75	GiB Inact
	77.33%	48.01	GiB Wired,	2.25%	1.40	GiB Cache
	1.94%	1.21	GiB Free,	0.00%	1.21	MiB Gap

	Real Installed:				64.00	GiB
	Real Available:			99.97%	63.98	GiB
	Real Managed:			97.04%	62.08	GiB

	Logical Total:				64.00	GiB
	Logical Used:			90.07%	57.65	GiB
	Logical Free:			9.93%	6.35	GiB

Kernel Memory:					22.62	GiB
	Data:				99.91%	22.60	GiB
	Text:				0.09%	21.27	MiB

Kernel Memory Map:				54.28	GiB
	Size:				34.75%	18.86	GiB
	Free:				65.25%	35.42	GiB

------------------------------------------------------------------------

ARC Summary: (HEALTHY)
	Memory Throttle Count:			0

ARC Misc:
	Deleted:				430.91m
	Recycle Misses:				111.27m
	Mutex Misses:				2.49m
	Evict Skips:				647.25m

ARC Size:				87.63%	26.77	GiB
	Target Size: (Adaptive)		87.64%	26.77	GiB
	Min Size (Hard Limit):		12.50%	3.82	GiB
	Max Size (High Water):		8:1	30.54	GiB

ARC Size Breakdown:
	Recently Used Cache Size:	58.64%	15.70	GiB
	Frequently Used Cache Size:	41.36%	11.07	GiB

ARC Hash Breakdown:
	Elements Max:				2.19m
	Elements Current:		86.15%	1.89m
	Collisions:				344.47m
	Chain Max:				17
	Chains:					552.47k

------------------------------------------------------------------------

ARC Efficiency:					21.94b
	Cache Hit Ratio:		97.00%	21.28b
	Cache Miss Ratio:		3.00%	657.23m
	Actual Hit Ratio:		73.15%	16.05b

	Data Demand Efficiency:		98.94%	1.32b
	Data Prefetch Efficiency:	14.83%	299.44m

	CACHE HITS BY CACHE LIST:
	  Anonymously Used:		23.03%	4.90b
	  Most Recently Used:		6.12%	1.30b
	  Most Frequently Used:		69.29%	14.75b
	  Most Recently Used Ghost:	0.50%	105.94m
	  Most Frequently Used Ghost:	1.07%	226.92m

	CACHE HITS BY DATA TYPE:
	  Demand Data:			6.11%	1.30b
	  Prefetch Data:		0.21%	44.42m
	  Demand Metadata:		69.29%	14.75b
	  Prefetch Metadata:		24.38%	5.19b

	CACHE MISSES BY DATA TYPE:
	  Demand Data:			2.12%	13.90m
	  Prefetch Data:		38.80%	255.02m
	  Demand Metadata:		30.97%	203.56m
	  Prefetch Metadata:		28.11%	184.75m

------------------------------------------------------------------------

L2ARC is disabled

------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)

DMU Efficiency:					24.08b
	Hit Ratio:			66.02%	15.90b
	Miss Ratio:			33.98%	8.18b

	Colinear:				8.18b
	  Hit Ratio:			0.01%	560.82k
	  Miss Ratio:			99.99%	8.18b

	Stride:					15.23b
	  Hit Ratio:			99.98%	15.23b
	  Miss Ratio:			0.02%	2.62m

DMU Misc:
	Reclaim:				8.18b
	  Successes:			0.08%	6.31m
	  Failures:			99.92%	8.17b

	Streams:				663.44m
	  +Resets:			0.06%	397.18k
	  -Resets:			99.94%	663.04m
	  Bogus:				0

------------------------------------------------------------------------

VDEV cache is disabled

------------------------------------------------------------------------

ZFS Tunables (sysctl):
	kern.maxusers                           384
	vm.kmem_size                            66662760448
	vm.kmem_size_scale                      1
	vm.kmem_size_min                        0
	vm.kmem_size_max                        329853485875
	vfs.zfs.l2c_only_size                   0
	vfs.zfs.mfu_ghost_data_lsize            2121007104
	vfs.zfs.mfu_ghost_metadata_lsize        7876605440
	vfs.zfs.mfu_ghost_size                  9997612544
	vfs.zfs.mfu_data_lsize                  10160539648
	vfs.zfs.mfu_metadata_lsize              17161216
	vfs.zfs.mfu_size                        11163991040
	vfs.zfs.mru_ghost_data_lsize            7235079680
	vfs.zfs.mru_ghost_metadata_lsize        11107812352
	vfs.zfs.mru_ghost_size                  18342892032
	vfs.zfs.mru_data_lsize                  4406255616
	vfs.zfs.mru_metadata_lsize              3924364288
	vfs.zfs.mru_size                        8893582336
	vfs.zfs.anon_data_lsize                 0
	vfs.zfs.anon_metadata_lsize             0
	vfs.zfs.anon_size                       999424
	vfs.zfs.l2arc_norw                      1
	vfs.zfs.l2arc_feed_again                1
	vfs.zfs.l2arc_noprefetch                1
	vfs.zfs.l2arc_feed_min_ms               200
	vfs.zfs.l2arc_feed_secs                 1
	vfs.zfs.l2arc_headroom                  2
	vfs.zfs.l2arc_write_boost               8388608
	vfs.zfs.l2arc_write_max                 8388608
	vfs.zfs.arc_meta_limit                  8199079936
	vfs.zfs.arc_meta_used                   14161977912
	vfs.zfs.arc_min                         4099539968
	vfs.zfs.arc_max                         32796319744
	vfs.zfs.dedup.prefetch                  1
	vfs.zfs.mdcomp_disable                  0
	vfs.zfs.write_limit_override            0
	vfs.zfs.write_limit_inflated            206088929280
	vfs.zfs.write_limit_max                 8587038720
	vfs.zfs.write_limit_min                 33554432
	vfs.zfs.write_limit_shift               3
	vfs.zfs.no_write_throttle               0
	vfs.zfs.zfetch.array_rd_sz              1048576
	vfs.zfs.zfetch.block_cap                256
	vfs.zfs.zfetch.min_sec_reap             2
	vfs.zfs.zfetch.max_streams              8
	vfs.zfs.prefetch_disable                0
	vfs.zfs.mg_alloc_failures               12
	vfs.zfs.check_hostid                    1
	vfs.zfs.recover                         0
	vfs.zfs.txg.synctime_ms                 1000
	vfs.zfs.txg.timeout                     5
	vfs.zfs.vdev.cache.bshift               16
	vfs.zfs.vdev.cache.size                 0
	vfs.zfs.vdev.cache.max                  16384
	vfs.zfs.vdev.write_gap_limit            4096
	vfs.zfs.vdev.read_gap_limit             32768
	vfs.zfs.vdev.aggregation_limit          131072
	vfs.zfs.vdev.ramp_rate                  2
	vfs.zfs.vdev.time_shift                 6
	vfs.zfs.vdev.min_pending                4
	vfs.zfs.vdev.max_pending                10
	vfs.zfs.vdev.bio_flush_disable          0
	vfs.zfs.cache_flush_disable             0
	vfs.zfs.zil_replay_disable              0
	vfs.zfs.zio.use_uma                     0
	vfs.zfs.snapshot_list_prefetch          0
	vfs.zfs.version.zpl                     5
	vfs.zfs.version.spa                     28
	vfs.zfs.version.acl                     1
	vfs.zfs.debug                           0
	vfs.zfs.super_owner                     0

------------------------------------------------------------------------



