Date: Mon, 04 Apr 2011 22:52:48 -0400
From: Boris Kochergin <spawk@acm.poly.edu>
To: FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject: Re: Kernel memory leak in 8.2-PRERELEASE?
Message-ID: <4D9A8400.1030006@acm.poly.edu>
In-Reply-To: <4D9A6BF7.5000106@acm.poly.edu>
References: <4D972FF7.6010901@acm.poly.edu> <20110402153315.GP78089@deviant.kiev.zoral.com.ua> <4D974393.80606@acm.poly.edu> <4D9A307F.9070408@acm.poly.edu> <20110404224334.GA64297@icarus.home.lan> <4D9A68AA.6040803@acm.poly.edu> <20110405010148.GA67821@icarus.home.lan> <4D9A6BF7.5000106@acm.poly.edu>
So, setting vfs.zfs.arc_max="2048M" in /boot/loader.conf was indeed all that was necessary to bring the situation under control. I remember it being a lot more nightmarish, so it's nice to see that it's improved. Thanks for everyone's advice. Per an earlier request, here is the output of "zfs-stats -a" right now (not while the system is running out of memory, but perhaps still interesting). The exact loader.conf snippet is at the end of this message, after the stats.

# zfs-stats -a

------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Apr  4 22:43:43 2011
------------------------------------------------------------------------

System Information:

        Kernel Version:                         802502 (osreldate)
        Hardware Platform:                      amd64
        Processor Architecture:                 amd64

FreeBSD 8.2-STABLE #3: Sat Apr 2 11:48:43 EDT 2011 spawk
10:43PM  up  1:32, 3 users, load averages: 0.13, 0.09, 0.07

------------------------------------------------------------------------

System Memory Statistics:
        Physical Memory:                        8181.32M
        Kernel Memory:                          2134.72M
        DATA:                           99.65%  2127.20M
        TEXT:                           0.35%   7.52M

------------------------------------------------------------------------

ZFS pool information:
        Storage pool Version (spa):             15
        Filesystem Version (zpl):               4

------------------------------------------------------------------------

ARC Misc:
        Deleted:                                583540
        Recycle Misses:                         355
        Mutex Misses:                           11
        Evict Skips:                            11

ARC Size:
        Current Size (arcsize):         100.00% 2048.07M
        Target Size (Adaptive, c):      100.00% 2048.00M
        Min Size (Hard Limit, c_min):   12.50%  256.00M
        Max Size (High Water, c_max):   ~8:1    2048.00M

ARC Size Breakdown:
        Recently Used Cache Size (p):           93.32%  1911.27M
        Frequently Used Cache Size (arcsize-p): 6.68%   136.80M

ARC Hash Breakdown:
        Elements Max:                           38168
        Elements Current:               99.18%  37856
        Collisions:                             127822
        Chain Max:                              5
        Chains:                                 4567

ARC Eviction Statistics:
        Evicts Total:                           80383607808
        Evicts Eligible for L2:         9.95%   8001851392
        Evicts Ineligible for L2:       90.05%  72381756416
        Evicts Cached to L2:                    0

ARC Efficiency:
        Cache Access Total:                     1439376
        Cache Hit Ratio:                55.64%  800797
        Cache Miss Ratio:               44.36%  638579
        Actual Hit Ratio:               51.12%  735809

        Data Demand Efficiency:         97.21%
        Data Prefetch Efficiency:       8.78%

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             5.96%   47738
          Most Recently Used (mru):     28.97%  232003
          Most Frequently Used (mfu):   62.91%  503806
          MRU Ghost (mru_ghost):        0.74%   5933
          MFU Ghost (mfu_ghost):        1.41%   11317

        CACHE HITS BY DATA TYPE:
          Demand Data:                  31.17%  249578
          Prefetch Data:                7.47%   59804
          Demand Metadata:              60.33%  483145
          Prefetch Metadata:            1.03%   8270

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  1.12%   7162
          Prefetch Data:                97.31%  621373
          Demand Metadata:              0.91%   5805
          Prefetch Metadata:            0.66%   4239

------------------------------------------------------------------------

VDEV Cache Summary:
        Access Total:                           32382
        Hits Ratio:                     21.62%  7001
        Miss Ratio:                     78.38%  25381
        Delegations:                            11361

------------------------------------------------------------------------

File-Level Prefetch Stats (DMU):

DMU Efficiency:
        Access Total:                           640507
        Hit Ratio:                      92.40%  591801
        Miss Ratio:                     7.60%   48706

        Colinear Access Total:                  48706
        Colinear Hit Ratio:             0.20%   99
        Colinear Miss Ratio:            99.80%  48607

        Stride Access Total:                    533456
        Stride Hit Ratio:               99.97%  533296
        Stride Miss Ratio:              0.03%   160

DMU misc:
        Reclaim successes:                      8857
        Reclaim failures:                       39750
        Stream resets:                          126
        Stream noresets:                        58504
        Bogus streams:                          0

------------------------------------------------------------------------

ZFS Tunable (sysctl):
        kern.maxusers=384
        vfs.zfs.l2c_only_size=0
        vfs.zfs.mfu_ghost_data_lsize=1956972544
        vfs.zfs.mfu_ghost_metadata_lsize=12218880
        vfs.zfs.mfu_ghost_size=1969191424
        vfs.zfs.mfu_data_lsize=103428096
        vfs.zfs.mfu_metadata_lsize=17053184
        vfs.zfs.mfu_size=124921344
        vfs.zfs.mru_ghost_data_lsize=133824512
        vfs.zfs.mru_ghost_metadata_lsize=40669184
        vfs.zfs.mru_ghost_size=174493696
        vfs.zfs.mru_data_lsize=1984430080
        vfs.zfs.mru_metadata_lsize=14490112
        vfs.zfs.mru_size=2005246464
        vfs.zfs.anon_data_lsize=0
        vfs.zfs.anon_metadata_lsize=0
        vfs.zfs.anon_size=0
        vfs.zfs.l2arc_norw=1
        vfs.zfs.l2arc_feed_again=1
        vfs.zfs.l2arc_noprefetch=0
        vfs.zfs.l2arc_feed_min_ms=200
        vfs.zfs.l2arc_feed_secs=1
        vfs.zfs.l2arc_headroom=2
        vfs.zfs.l2arc_write_boost=8388608
        vfs.zfs.l2arc_write_max=8388608
        vfs.zfs.arc_meta_limit=536870912
        vfs.zfs.arc_meta_used=59699288
        vfs.zfs.mdcomp_disable=0
        vfs.zfs.arc_min=268435456
        vfs.zfs.arc_max=2147483648
        vfs.zfs.zfetch.array_rd_sz=1048576
        vfs.zfs.zfetch.block_cap=256
        vfs.zfs.zfetch.min_sec_reap=2
        vfs.zfs.zfetch.max_streams=8
        vfs.zfs.prefetch_disable=0
        vfs.zfs.check_hostid=1
        vfs.zfs.recover=0
        vfs.zfs.txg.write_limit_override=0
        vfs.zfs.txg.synctime=5
        vfs.zfs.txg.timeout=30
        vfs.zfs.scrub_limit=10
        vfs.zfs.vdev.cache.bshift=16
        vfs.zfs.vdev.cache.size=10485760
        vfs.zfs.vdev.cache.max=16384
        vfs.zfs.vdev.aggregation_limit=131072
        vfs.zfs.vdev.ramp_rate=2
        vfs.zfs.vdev.time_shift=6
        vfs.zfs.vdev.min_pending=4
        vfs.zfs.vdev.max_pending=10
        vfs.zfs.cache_flush_disable=0
        vfs.zfs.zil_disable=0
        vfs.zfs.zio.use_uma=0
        vfs.zfs.version.zpl=4
        vfs.zfs.version.spa=15
        vfs.zfs.version.dmu_backup_stream=1
        vfs.zfs.version.dmu_backup_header=2
        vfs.zfs.version.acl=1
        vfs.zfs.debug=0
        vfs.zfs.super_owner=0
        vm.kmem_size=8294764544
        vm.kmem_size_scale=1
        vm.kmem_size_min=0
        vm.kmem_size_max=329853485875

------------------------------------------------------------------------
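For anyone who wants to try the same cap, this is roughly what the relevant /boot/loader.conf entry looks like. The 2048M figure is just what happened to suit this 8 GB box, not a recommendation; pick a value that fits your workload:

    # /boot/loader.conf: cap the ZFS ARC at 2 GB so the kernel
    # stops growing until the machine runs out of memory
    vfs.zfs.arc_max="2048M"

After a reboot you can confirm the limit took effect; the sysctl reports bytes, so 2048M shows up as 2147483648 (the same value visible in the tunables above):

    # sysctl vfs.zfs.arc_max
    vfs.zfs.arc_max: 2147483648
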