Date: Tue, 07 Feb 2012 22:03:39 +0600
From: "Eugene M. Zheganin" <emz@norma.perm.ru>
To: freebsd-stable <freebsd-stable@freebsd.org>
Subject: Re: zfs arc and amount of wired memory
Message-ID: <4F314B5B.100@norma.perm.ru>
In-Reply-To: <4F314892.50806@FreeBSD.org>
References: <4F30E284.8080905@norma.perm.ru> <4F310115.3070507@FreeBSD.org> <4F310C5A.6070400@norma.perm.ru> <4F310E75.7090301@FreeBSD.org> <4F3144A9.2000505@norma.perm.ru> <4F314892.50806@FreeBSD.org>
Hi.

On 07.02.2012 21:51, Andriy Gapon wrote:
>
> I am not sure that these conclusions are correct.  Wired is wired, it's not free.
> BTW, are you reluctant to share the full zfs-stats -a output?  You don't have to
> place it inline, you can upload it somewhere and provide a link.
>

Well... there's nothing secret in it (in case someone else is interested too, and so that it stays in the mailing list archive):

===Cut===
[emz@taiga:~]> zfs-stats -a

------------------------------------------------------------------------
ZFS Subsystem Report                            Tue Feb 7 22:01:09 2012
------------------------------------------------------------------------

System Information:

        Kernel Version:                         900044 (osreldate)
        Hardware Platform:                      amd64
        Processor Architecture:                 amd64

        ZFS Storage pool Version:               28
        ZFS Filesystem Version:                 5

FreeBSD 9.0-RELEASE #1: Mon Jan 23 13:36:16 YEKT 2012 emz
22:01  up 11 days,  2:44, 4 users, load averages: 0,30 0,26 0,32

------------------------------------------------------------------------

System Memory:

        4.30%   169.09  MiB Active,    1.04%   40.95   MiB Inact
        90.80%  3.48    GiB Wired,     2.17%   85.25   MiB Cache
        0.69%   27.30   MiB Free,      0.99%   39.08   MiB Gap

        Real Installed:                         4.00    GiB
        Real Available:                 99.61%  3.98    GiB
        Real Managed:                   96.31%  3.84    GiB

        Logical Total:                          4.00    GiB
        Logical Used:                   96.25%  3.85    GiB
        Logical Free:                   3.75%   153.50  MiB

Kernel Memory:                                  397.53  MiB
        Data:                           96.18%  382.33  MiB
        Text:                           3.82%   15.20   MiB

Kernel Memory Map:                              2.69    GiB
        Size:                           9.93%   273.20  MiB
        Free:                           90.07%  2.42    GiB

------------------------------------------------------------------------

ARC Summary: (THROTTLED)
        Memory Throttle Count:                  3.20k

ARC Misc:
        Deleted:                                10.83m
        Recycle Misses:                         1.55m
        Mutex Misses:                           7.80k
        Evict Skips:                            7.80k

ARC Size:                               12.50%  363.20  MiB
        Target Size: (Adaptive)         12.50%  363.18  MiB
        Min Size (Hard Limit):          12.50%  363.18  MiB
        Max Size (High Water):          8:1     2.84    GiB

ARC Size Breakdown:
        Recently Used Cache Size:       56.17%  204.02  MiB
        Frequently Used Cache Size:     43.83%  159.18  MiB

ARC Hash Breakdown:
        Elements Max:                           191.41k
        Elements Current:               32.79%  62.76k
        Collisions:                             28.06m
        Chain Max:                              17
        Chains:                                 12.77k

------------------------------------------------------------------------

ARC Efficiency:                                 179.54m
        Cache Hit Ratio:                95.07%  170.68m
        Cache Miss Ratio:               4.93%   8.86m
        Actual Hit Ratio:               95.07%  170.68m

        Data Demand Efficiency:         94.72%  152.53m
        Data Prefetch Efficiency:       0.00%   20

        CACHE HITS BY CACHE LIST:
          Most Recently Used:           30.50%  52.05m
          Most Frequently Used:         69.50%  118.63m
          Most Recently Used Ghost:     0.30%   517.41k
          Most Frequently Used Ghost:   1.02%   1.74m

        CACHE HITS BY DATA TYPE:
          Demand Data:                  84.65%  144.48m
          Prefetch Data:                0.00%   0
          Demand Metadata:              15.35%  26.20m
          Prefetch Metadata:            0.00%   52

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  90.88%  8.05m
          Prefetch Data:                0.00%   20
          Demand Metadata:              9.12%   807.98k
          Prefetch Metadata:            0.00%   172

------------------------------------------------------------------------

L2ARC is disabled

------------------------------------------------------------------------

------------------------------------------------------------------------

VDEV cache is disabled

------------------------------------------------------------------------

ZFS Tunables (sysctl):
        kern.maxusers                           384
        vm.kmem_size                            4120326144
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        329853485875
        vfs.zfs.l2c_only_size                   0
        vfs.zfs.mfu_ghost_data_lsize            114792960
        vfs.zfs.mfu_ghost_metadata_lsize        69912064
        vfs.zfs.mfu_ghost_size                  184705024
        vfs.zfs.mfu_data_lsize                  26686464
        vfs.zfs.mfu_metadata_lsize              13492736
        vfs.zfs.mfu_size                        45798912
        vfs.zfs.mru_ghost_data_lsize            22662656
        vfs.zfs.mru_ghost_metadata_lsize        142631424
        vfs.zfs.mru_ghost_size                  165294080
        vfs.zfs.mru_data_lsize                  149837312
        vfs.zfs.mru_metadata_lsize              6439424
        vfs.zfs.mru_size                        215066624
        vfs.zfs.anon_data_lsize                 0
        vfs.zfs.anon_metadata_lsize             0
        vfs.zfs.anon_size                       1421824
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_noprefetch                1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_write_boost               8388608
        vfs.zfs.l2arc_write_max                 8388608
        vfs.zfs.arc_meta_limit                  761646080
        vfs.zfs.arc_meta_used                   202913864
        vfs.zfs.arc_min                         380823040
        vfs.zfs.arc_max                         3046584320
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.write_limit_override            0
        vfs.zfs.write_limit_inflated            12834803712
        vfs.zfs.write_limit_max                 534783488
        vfs.zfs.write_limit_min                 33554432
        vfs.zfs.write_limit_shift               3
        vfs.zfs.no_write_throttle               0
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.zfetch.block_cap                256
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.prefetch_disable                1
        vfs.zfs.mg_alloc_failures               8
        vfs.zfs.check_hostid                    1
        vfs.zfs.recover                         0
        vfs.zfs.txg.synctime_ms                 1000
        vfs.zfs.txg.timeout                     5
        vfs.zfs.scrub_limit                     10
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.cache.size                 0
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.aggregation_limit          131072
        vfs.zfs.vdev.ramp_rate                  2
        vfs.zfs.vdev.time_shift                 6
        vfs.zfs.vdev.min_pending                4
        vfs.zfs.vdev.max_pending                10
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.zio.use_uma                     0
        vfs.zfs.version.zpl                     5
        vfs.zfs.version.spa                     28
        vfs.zfs.version.acl                     1
        vfs.zfs.debug                           0
        vfs.zfs.super_owner                     0

------------------------------------------------------------------------
[emz@taiga:~]>
===Cut===

Thanks.
Eugene.
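
For anyone comparing these figures on a similar system, the same ARC counters can also be read directly via sysctl, and the ARC can be given a hard cap at boot through a loader tunable; the lines below are only a sketch, and the 2G value is an arbitrary example, not a recommendation for this machine:

    # current ARC size and its configured limits, in bytes
    sysctl kstat.zfs.misc.arcstats.size
    sysctl vfs.zfs.arc_min vfs.zfs.arc_max

    # optional hard cap, set in /boot/loader.conf and picked up at the next boot
    vfs.zfs.arc_max="2G"

As a cross-check, the "Min Size (Hard Limit)" of 363.18 MiB in the report corresponds to the vfs.zfs.arc_min value of 380823040 bytes listed under ZFS Tunables, and the 2.84 GiB "Max Size (High Water)" to the vfs.zfs.arc_max value of 3046584320 bytes.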