Date:      Tue, 21 Jan 2014 13:28:12 +0200
From:      Andriy Gapon <avg@FreeBSD.org>
To:        freebsd-fs@FreeBSD.org, jlh@FreeBSD.org
Subject:   Re: ARC_SPACE_OTHER exceeds arc_max
Message-ID:  <52DE59CC.5070408@FreeBSD.org>
In-Reply-To: <20131011184206.GA60057@caravan.chchile.org>
References:  <20131011184206.GA60057@caravan.chchile.org>

on 11/10/2013 21:42 Jeremie Le Hen said the following:
> Hi,
> 
> (Please Cc: me on reply, as I'm not subscribed.)

Then you probably should not have set the Mail-Followup-To header to
freebsd-fs@FreeBSD.org.

> On my FreeBSD 9.1 machine, roughly 2/3 of the times the daily scripts
> are run, my ARC size grows far beyond vfs.zfs.arc_max, which is set to
> 536870912 (512 MB).
> 
> The consequence of this is that userland processes are killed (!).  OK,
> this box has no swap space and should have, but it still sounds really
> crazy that a filesystem cache is able to "reclaim" memory from running
> processes :).

ARC_SPACE_OTHER is used to account for dnode objects and dbuf objects, as
opposed to the actual data buffers in the ARC cache.  The fact that
ARC_SPACE_OTHER grows beyond the limits means that those objects cannot be
evicted and thus are still in use.  My guess is that this is because you have
rather high vnode limits and it is the ZFS vnodes that keep those objects in
memory.  The fact that arc_meta_used significantly exceeds arc_meta_limit
supports this theory, as arc_meta_used accounts for data buffers including
those that back dnode objects and dbuf objects.
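
Just to illustrate (a rough check, not a definitive diagnostic), comparing the
vnode counts with the ARC metadata numbers should show whether that is what is
going on:

  # how many vnodes are cached versus the configured limit
  sysctl kern.maxvnodes vfs.numvnodes vfs.freevnodes
  # ARC metadata usage versus its limit, and the "other" bucket itself
  sysctl vfs.zfs.arc_meta_limit vfs.zfs.arc_meta_used
  sysctl kstat.zfs.misc.arcstats.other_size

If vfs.numvnodes stays close to kern.maxvnodes while other_size keeps growing,
that would be consistent with the above.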

My other guess is that you have lots of very small files and/or lots of
directories that are traversed by the daily scripts.
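
If you want a rough idea of how much the daily run has to walk, something like
the following gives a ballpark figure (-xdev keeps find on one filesystem, so
repeat it per mounted dataset as needed):

  # count files and directories on the root filesystem only
  find / -xdev | wc -l
  find / -xdev -type d | wc -l

Every file or directory that gets visited needs a vnode, and each vnode that
stays cached keeps its dnode and dbuf objects in memory as described above.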

FreeBSD does not have any feedback mechanism from the ARC to the vnode
management code, so it cannot ask for vnodes on the free list to be reclaimed
when the metadata limits are exceeded.
Perhaps we should consider adding such a mechanism, if possible.

But meanwhile, please consider tuning your vnode limits so that they are
consistent with your ARC limits.
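
For example, something along these lines in /etc/sysctl.conf (the value is
purely illustrative, pick one that fits a 512 MB ARC and your metadata working
set):

  # example only: cap the vnode cache so that the dnodes/dbufs backing the
  # cached vnodes can stay within the ARC metadata limit
  kern.maxvnodes=100000

kern.maxvnodes can also be changed at runtime with sysctl(8), so it is easy to
experiment with different values.
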
You may also want to set properties like nosuid / noexec on some of your
filesystems to limit the number of files visited by the daily security checks.
I am not sure whether this advice is applicable to your configuration.
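
In ZFS terms those would be the setuid and exec properties, e.g. (the dataset
name here is just a placeholder):

  # equivalent of the nosuid / noexec mount options for a ZFS dataset
  zfs set setuid=off tank/data
  zfs set exec=off tank/data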

> I've run top -b every 30 seconds to get an idea of the system's memory
> usage; the logs are below.  Here is also a snippet of my zfs-related sysctls:
> 
> vfs.zfs.l2arc_norw: 1
> vfs.zfs.l2arc_feed_again: 1
> vfs.zfs.l2arc_noprefetch: 1
> vfs.zfs.l2arc_feed_min_ms: 200
> vfs.zfs.l2arc_feed_secs: 1
> vfs.zfs.l2arc_headroom: 2
> vfs.zfs.l2arc_write_boost: 8388608
> vfs.zfs.l2arc_write_max: 8388608
> vfs.zfs.arc_meta_limit: 134217728
> vfs.zfs.arc_meta_used: 236528568
> vfs.zfs.arc_min: 67108864
> vfs.zfs.arc_max: 536870912
> debug.adaptive_machine_arch: 1
> hw.machine_arch: amd64
> kstat.zfs.misc.arcstats.hits: 77149964
> kstat.zfs.misc.arcstats.misses: 13690743
> kstat.zfs.misc.arcstats.demand_data_hits: 8073317
> kstat.zfs.misc.arcstats.demand_data_misses: 274103
> kstat.zfs.misc.arcstats.demand_metadata_hits: 69076639
> kstat.zfs.misc.arcstats.demand_metadata_misses: 13416617
> kstat.zfs.misc.arcstats.prefetch_data_hits: 0
> kstat.zfs.misc.arcstats.prefetch_data_misses: 0
> kstat.zfs.misc.arcstats.prefetch_metadata_hits: 8
> kstat.zfs.misc.arcstats.prefetch_metadata_misses: 23
> kstat.zfs.misc.arcstats.mru_hits: 33260238
> kstat.zfs.misc.arcstats.mru_ghost_hits: 2830869
> kstat.zfs.misc.arcstats.mfu_hits: 43889719
> kstat.zfs.misc.arcstats.mfu_ghost_hits: 3452884
> kstat.zfs.misc.arcstats.allocated: 14361097
> kstat.zfs.misc.arcstats.deleted: 7860017
> kstat.zfs.misc.arcstats.stolen: 6994054
> kstat.zfs.misc.arcstats.recycle_miss: 7051205
> kstat.zfs.misc.arcstats.mutex_miss: 4479
> kstat.zfs.misc.arcstats.evict_skip: 4052583
> kstat.zfs.misc.arcstats.evict_l2_cached: 0
> kstat.zfs.misc.arcstats.evict_l2_eligible: 129211340800
> kstat.zfs.misc.arcstats.evict_l2_ineligible: 14336
> kstat.zfs.misc.arcstats.hash_elements: 88387
> kstat.zfs.misc.arcstats.hash_elements_max: 133658
> kstat.zfs.misc.arcstats.hash_collisions: 14197499
> kstat.zfs.misc.arcstats.hash_chains: 16044
> kstat.zfs.misc.arcstats.hash_chain_max: 27
> kstat.zfs.misc.arcstats.p: 188161024
> kstat.zfs.misc.arcstats.c: 536870912
> kstat.zfs.misc.arcstats.c_min: 67108864
> kstat.zfs.misc.arcstats.c_max: 536870912
> kstat.zfs.misc.arcstats.size: 534264760
> kstat.zfs.misc.arcstats.hdr_size: 20228688
> kstat.zfs.misc.arcstats.data_size: 406874112
> kstat.zfs.misc.arcstats.other_size: 107161960
> kstat.zfs.misc.arcstats.l2_hits: 0
> kstat.zfs.misc.arcstats.l2_misses: 0
> kstat.zfs.misc.arcstats.l2_feeds: 0
> kstat.zfs.misc.arcstats.l2_rw_clash: 0
> kstat.zfs.misc.arcstats.l2_read_bytes: 0
> kstat.zfs.misc.arcstats.l2_write_bytes: 0
> kstat.zfs.misc.arcstats.l2_writes_sent: 0
> kstat.zfs.misc.arcstats.l2_writes_done: 0
> kstat.zfs.misc.arcstats.l2_writes_error: 0
> kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 0
> kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0
> kstat.zfs.misc.arcstats.l2_evict_reading: 0
> kstat.zfs.misc.arcstats.l2_free_on_write: 0
> kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
> kstat.zfs.misc.arcstats.l2_cksum_bad: 0
> kstat.zfs.misc.arcstats.l2_io_error: 0
> kstat.zfs.misc.arcstats.l2_size: 0
> kstat.zfs.misc.arcstats.l2_hdr_size: 0
> kstat.zfs.misc.arcstats.l2_write_trylock_fail: 0
> kstat.zfs.misc.arcstats.l2_write_passed_headroom: 0
> kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 0
> kstat.zfs.misc.arcstats.l2_write_in_l2: 0
> kstat.zfs.misc.arcstats.l2_write_io_in_progress: 0
> kstat.zfs.misc.arcstats.l2_write_not_cacheable: 5
> kstat.zfs.misc.arcstats.l2_write_full: 0
> kstat.zfs.misc.arcstats.l2_write_buffer_iter: 0
> kstat.zfs.misc.arcstats.l2_write_pios: 0
> kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 0
> kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 0
> kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 0
> kstat.zfs.misc.arcstats.memory_throttle_count: 0
> kstat.zfs.misc.arcstats.duplicate_buffers: 0
> kstat.zfs.misc.arcstats.duplicate_buffers_size: 0
> kstat.zfs.misc.arcstats.duplicate_reads: 0
> 
> Tue Oct  8 03:00:11 CEST 2013
> Mem: 97M Active, 112M Inact, 1258M Wired, 552K Cache, 482M Free
> ARC: 310M Total, 42M MFU, 147M MRU, 912K Anon, 20M Header, 101M Other
> 
> Tue Oct  8 03:00:41 CEST 2013
> Mem: 97M Active, 112M Inact, 1258M Wired, 552K Cache, 482M Free
> ARC: 310M Total, 42M MFU, 147M MRU, 912K Anon, 20M Header, 101M Other
> 
> Tue Oct  8 03:01:11 CEST 2013
> Mem: 105M Active, 114M Inact, 1274M Wired, 552K Cache, 457M Free
> ARC: 330M Total, 42M MFU, 159M MRU, 2241K Anon, 20M Header, 107M Other
> 
> Tue Oct  8 03:01:41 CEST 2013
> Mem: 105M Active, 115M Inact, 1241M Wired, 552K Cache, 490M Free
> ARC: 332M Total, 32M MFU, 137M MRU, 912K Anon, 21M Header, 141M Other
> 
> Tue Oct  8 03:02:11 CEST 2013
> Mem: 105M Active, 121M Inact, 1251M Wired, 552K Cache, 473M Free
> ARC: 326M Total, 34M MFU, 145M MRU, 912K Anon, 22M Header, 124M Other
> 
> Tue Oct  8 03:02:42 CEST 2013
> Mem: 105M Active, 121M Inact, 1247M Wired, 552K Cache, 478M Free
> ARC: 350M Total, 32M MFU, 143M MRU, 1265K Anon, 23M Header, 150M Other
> 
> Tue Oct  8 03:03:12 CEST 2013
> Mem: 104M Active, 121M Inact, 1232M Wired, 552K Cache, 493M Free
> ARC: 293M Total, 35M MFU, 126M MRU, 928K Anon, 24M Header, 107M Other
> 
> Tue Oct  8 03:03:42 CEST 2013
> Mem: 104M Active, 119M Inact, 1253M Wired, 552K Cache, 475M Free
> ARC: 367M Total, 37M MFU, 145M MRU, 928K Anon, 24M Header, 160M Other
> 
> Tue Oct  8 03:04:12 CEST 2013
> Mem: 104M Active, 119M Inact, 1284M Wired, 552K Cache, 443M Free
> ARC: 316M Total, 55M MFU, 157M MRU, 912K Anon, 24M Header, 78M Other
> 
> Tue Oct  8 03:04:42 CEST 2013
> Mem: 104M Active, 119M Inact, 1379M Wired, 552K Cache, 348M Free
> ARC: 413M Total, 84M MFU, 223M MRU, 1743K Anon, 24M Header, 81M Other
> 
> Tue Oct  8 03:05:12 CEST 2013
> Mem: 104M Active, 119M Inact, 1451M Wired, 552K Cache, 276M Free
> ARC: 484M Total, 106M MFU, 274M MRU, 928K Anon, 23M Header, 80M Other
> 
> Tue Oct  8 03:05:42 CEST 2013
> Mem: 104M Active, 119M Inact, 1451M Wired, 552K Cache, 276M Free
> ARC: 494M Total, 119M MFU, 261M MRU, 930K Anon, 23M Header, 90M Other
> 
> Tue Oct  8 03:06:12 CEST 2013
> Mem: 104M Active, 119M Inact, 1451M Wired, 552K Cache, 276M Free
> ARC: 501M Total, 151M MFU, 228M MRU, 1226K Anon, 24M Header, 96M Other
> 
> Tue Oct  8 03:06:42 CEST 2013
> Mem: 104M Active, 119M Inact, 1450M Wired, 552K Cache, 277M Free
> ARC: 502M Total, 191M MFU, 188M MRU, 944K Anon, 24M Header, 99M Other
> 
> Tue Oct  8 03:07:12 CEST 2013
> Mem: 104M Active, 119M Inact, 1451M Wired, 552K Cache, 276M Free
> ARC: 507M Total, 226M MFU, 153M MRU, 1094K Anon, 24M Header, 103M Other
> 
> Tue Oct  8 03:07:42 CEST 2013
> Mem: 104M Active, 118M Inact, 1407M Wired, 552K Cache, 321M Free
> ARC: 479M Total, 175M MFU, 161M MRU, 915K Anon, 25M Header, 117M Other
> 
> Tue Oct  8 03:08:13 CEST 2013
> Mem: 94M Active, 116M Inact, 1387M Wired, 552K Cache, 353M Free
> ARC: 506M Total, 90M MFU, 231M MRU, 912K Anon, 26M Header, 158M Other
> 
> Tue Oct  8 03:08:43 CEST 2013
> Mem: 96M Active, 117M Inact, 1351M Wired, 552K Cache, 386M Free
> ARC: 436M Total, 52M MFU, 231M MRU, 1322K Anon, 23M Header, 129M Other
> 
> Tue Oct  8 03:09:13 CEST 2013
> Mem: 95M Active, 117M Inact, 1314M Wired, 552K Cache, 423M Free
> ARC: 440M Total, 73M MFU, 173M MRU, 941K Anon, 23M Header, 170M Other
> 
> Tue Oct  8 03:09:43 CEST 2013
> Mem: 94M Active, 117M Inact, 1327M Wired, 552K Cache, 412M Free
> ARC: 510M Total, 80M MFU, 182M MRU, 912K Anon, 23M Header, 223M Other
> 
> Tue Oct  8 03:10:13 CEST 2013
> Mem: 94M Active, 117M Inact, 1276M Wired, 552K Cache, 463M Free
> ARC: 578M Total, 65M MFU, 150M MRU, 912K Anon, 27M Header, 336M Other
> 
> Tue Oct  8 03:10:43 CEST 2013
> Mem: 95M Active, 117M Inact, 1262M Wired, 552K Cache, 476M Free
> ARC: 585M Total, 56M MFU, 144M MRU, 913K Anon, 26M Header, 358M Other
> 
> Tue Oct  8 03:11:13 CEST 2013
> Mem: 103M Active, 114M Inact, 1269M Wired, 552K Cache, 464M Free
> ARC: 590M Total, 66M MFU, 140M MRU, 912K Anon, 26M Header, 357M Other
> 
> Tue Oct  8 03:11:44 CEST 2013
> Mem: 103M Active, 114M Inact, 1294M Wired, 552K Cache, 439M Free
> ARC: 667M Total, 70M MFU, 155M MRU, 1056K Anon, 27M Header, 415M Other
> 
> Tue Oct  8 03:12:14 CEST 2013
> Mem: 103M Active, 114M Inact, 1389M Wired, 552K Cache, 343M Free
> ARC: 792M Total, 73M MFU, 185M MRU, 1040K Anon, 27M Header, 507M Other
> 
> Tue Oct  8 03:12:44 CEST 2013
> Mem: 94M Active, 119M Inact, 1472M Wired, 552K Cache, 265M Free
> ARC: 891M Total, 82M MFU, 212M MRU, 1056K Anon, 27M Header, 568M Other
> 
> Tue Oct  8 03:13:15 CEST 2013
> Mem: 94M Active, 119M Inact, 1538M Wired, 544K Cache, 199M Free
> ARC: 951M Total, 85M MFU, 220M MRU, 928K Anon, 28M Header, 618M Other
> 
> Tue Oct  8 03:13:45 CEST 2013
> Mem: 136M Active, 22M Inact, 1682M Wired, 46M Cache, 65M Free
> ARC: 1113M Total, 90M MFU, 259M MRU, 912K Anon, 26M Header, 738M Other
> 
> Tue Oct  8 03:14:15 CEST 2013
> Mem: 153M Active, 3936K Inact, 1736M Wired, 36M Cache, 21M Free
> ARC: 1159M Total, 95M MFU, 271M MRU, 1040K Anon, 25M Header, 767M Other
> 
> Tue Oct  8 03:14:46 CEST 2013
> Mem: 62M Active, 15M Inact, 1808M Wired, 35M Cache, 30M Free
> ARC: 1213M Total, 80M MFU, 294M MRU, 819K Anon, 19M Header, 819M Other
> 
> Tue Oct  8 03:15:17 CEST 2013
> Mem: 61M Active, 6488K Inact, 1816M Wired, 33M Cache, 34M Free
> ARC: 1194M Total, 73M MFU, 293M MRU, 1040K Anon, 19M Header, 808M Other
> 
> Tue Oct  8 03:15:47 CEST 2013
> Mem: 75M Active, 2548K Inact, 1808M Wired, 25M Cache, 41M Free
> ARC: 1189M Total, 72M MFU, 292M MRU, 1475K Anon, 19M Header, 804M Other
> 
> Tue Oct  8 03:16:17 CEST 2013
> Mem: 75M Active, 2928K Inact, 1806M Wired, 24M Cache, 43M Free
> ARC: 1183M Total, 72M MFU, 291M MRU, 912K Anon, 18M Header, 801M Other
> 
> Tue Oct  8 03:16:47 CEST 2013
> Mem: 64M Active, 14M Inact, 1805M Wired, 24M Cache, 44M Free
> ARC: 1179M Total, 71M MFU, 290M MRU, 912K Anon, 18M Header, 798M Other
> 
> Tue Oct  8 03:17:17 CEST 2013
> Mem: 21M Active, 57M Inact, 1802M Wired, 24M Cache, 47M Free
> ARC: 1174M Total, 71M MFU, 289M MRU, 912K Anon, 18M Header, 795M Other
> 
> Tue Oct  8 03:17:48 CEST 2013
> Mem: 16M Active, 61M Inact, 1801M Wired, 24M Cache, 49M Free
> ARC: 1170M Total, 70M MFU, 288M MRU, 912K Anon, 18M Header, 792M Other
> 
> Tue Oct  8 03:18:18 CEST 2013
> Mem: 16M Active, 61M Inact, 1800M Wired, 24M Cache, 50M Free
> ARC: 1165M Total, 70M MFU, 287M MRU, 912K Anon, 18M Header, 789M Other
> 
> Tue Oct  8 03:18:48 CEST 2013
> Mem: 17M Active, 62M Inact, 1799M Wired, 23M Cache, 50M Free
> ARC: 1162M Total, 70M MFU, 286M MRU, 912K Anon, 18M Header, 787M Other
> 
> Tue Oct  8 03:19:18 CEST 2013
> Mem: 14M Active, 57M Inact, 1797M Wired, 23M Cache, 60M Free
> ARC: 1157M Total, 69M MFU, 285M MRU, 912K Anon, 18M Header, 784M Other
> 
> Tue Oct  8 03:19:48 CEST 2013
> Mem: 14M Active, 57M Inact, 1796M Wired, 23M Cache, 61M Free
> ARC: 1153M Total, 69M MFU, 284M MRU, 912K Anon, 18M Header, 781M Other
> 
> Tue Oct  8 03:20:18 CEST 2013
> Mem: 15M Active, 56M Inact, 1794M Wired, 22M Cache, 63M Free
> ARC: 1148M Total, 69M MFU, 283M MRU, 912K Anon, 18M Header, 777M Other
> 
> Tue Oct  8 03:20:48 CEST 2013
> Mem: 14M Active, 56M Inact, 1793M Wired, 22M Cache, 65M Free
> ARC: 1145M Total, 69M MFU, 282M MRU, 912K Anon, 18M Header, 775M Other
> 
> 


-- 
Andriy Gapon


