Date: Thu, 13 May 2010 00:12:24 GMT
From: Edwin Amsler <EdwinGuy@GMail.com>
To: freebsd-gnats-submit@FreeBSD.org
Subject: i386/146528: Severe memory leak in ZFS on i386
Message-ID: <201005130012.o4D0COUw097456@www.freebsd.org>
Resent-Message-ID: <201005130020.o4D0K1dd024388@freefall.freebsd.org>
>Number:         146528
>Category:       i386
>Synopsis:       Severe memory leak in ZFS on i386
>Confidential:   no
>Severity:       serious
>Priority:       medium
>Responsible:    freebsd-i386
>State:          open
>Quarter:
>Keywords:
>Date-Required:
>Class:          sw-bug
>Submitter-Id:   current-users
>Arrival-Date:   Thu May 13 00:20:01 UTC 2010
>Closed-Date:
>Last-Modified:
>Originator:     Edwin Amsler
>Release:        8.0-RELEASE-p2
>Organization:   Prime Focus VFX
>Environment:
FreeBSD Vault.enet 8.0-RELEASE-p2 FreeBSD 8.0-RELEASE-p2 #0: Tue Jan 5 16:02:27 UTC 2010 root@i386-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC i386
>Description:
When doing high-throughput file writes on ZFS, free memory steadily plummets towards zero until the kernel crashes. I'm unsure whether this can also happen over a long period of time, because I don't tend to transfer a lot of data to this machine; it only becomes a problem when I need to move or copy files between directories.

It happens only when writing to the pool: I can cat every file (# find . -exec cat {} > /dev/null ";") and the system doesn't go down.

This has been a problem on 7.2 as well as the 8.0 release, and it doesn't matter how much RAM I have or what my ARC size settings are. ZFS appears to lose track of the memory it allocates, since the ARC size from sysctl -a shows it using only ~113MB.

I'll dump debug and config info in this ticket. The machine is an AMD Athlon 1800+ or so with 2GB of RAM and 1TB(x2) disks configured as a mirror. The problem manifests on any ZFS filesystem with any properties. If needed I can post a video of this happening.
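The mismatch described above (free memory falling to zero while the ARC reports only ~113MB) can be watched directly. A minimal sketch, assuming a FreeBSD system: the kstat OID is taken from the sysctl dump in this ticket, the free-page and page-size OIDs are standard FreeBSD sysctls, and the report() helper name is mine:

```shell
#!/bin/sh
# Compare ARC-accounted memory with system free memory.
# kstat.zfs.misc.arcstats.size appears in the sysctl output in this
# ticket; vm.stats.vm.v_free_count and hw.pagesize are standard FreeBSD
# sysctls. On a system without these OIDs, a notice is printed instead.
report() {
    if sysctl -n kstat.zfs.misc.arcstats.size >/dev/null 2>&1; then
        arc=$(sysctl -n kstat.zfs.misc.arcstats.size)
        pages=$(sysctl -n vm.stats.vm.v_free_count)
        pgsz=$(sysctl -n hw.pagesize)
        printf 'arc=%d MB free=%d MB\n' \
            "$((arc / 1048576))" "$((pages * pgsz / 1048576))"
    else
        echo 'arcstats not available (not a FreeBSD/ZFS system)'
    fi
}
report
```

On the affected machine, run it in a loop while the copy workload is active, e.g. `while :; do report; sleep 1; done`. If the report's diagnosis is right, the free column falls towards zero while the arc column stays flat below the 128M arc_max.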
-------------------------------------------
relevant sysctl -a output:

$ sysctl -a | grep zfs
vfs.zfs.arc_meta_limit: 33554432
vfs.zfs.arc_meta_used: 25407772
vfs.zfs.mdcomp_disable: 0
vfs.zfs.arc_min: 16777216
vfs.zfs.arc_max: 134217728
vfs.zfs.zfetch.array_rd_sz: 1048576
vfs.zfs.zfetch.block_cap: 256
vfs.zfs.zfetch.min_sec_reap: 2
vfs.zfs.zfetch.max_streams: 8
vfs.zfs.prefetch_disable: 0
vfs.zfs.recover: 0
vfs.zfs.txg.synctime: 5
vfs.zfs.txg.timeout: 30
vfs.zfs.scrub_limit: 10
vfs.zfs.vdev.cache.bshift: 16
vfs.zfs.vdev.cache.size: 67108864
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.vdev.aggregation_limit: 131072
vfs.zfs.vdev.ramp_rate: 2
vfs.zfs.vdev.time_shift: 6
vfs.zfs.vdev.min_pending: 4
vfs.zfs.vdev.max_pending: 35
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zil_disable: 0
vfs.zfs.version.zpl: 3
vfs.zfs.version.vdev_boot: 1
vfs.zfs.version.spa: 13
vfs.zfs.version.dmu_backup_stream: 1
vfs.zfs.version.dmu_backup_header: 2
vfs.zfs.version.acl: 1
vfs.zfs.debug: 0
vfs.zfs.super_owner: 0
kstat.zfs.misc.arcstats.hits: 632739
kstat.zfs.misc.arcstats.misses: 113083
kstat.zfs.misc.arcstats.demand_data_hits: 491681
kstat.zfs.misc.arcstats.demand_data_misses: 3297
kstat.zfs.misc.arcstats.demand_metadata_hits: 114734
kstat.zfs.misc.arcstats.demand_metadata_misses: 2758
kstat.zfs.misc.arcstats.prefetch_data_hits: 25630
kstat.zfs.misc.arcstats.prefetch_data_misses: 105505
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 694
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 1523
kstat.zfs.misc.arcstats.mru_hits: 457817
kstat.zfs.misc.arcstats.mru_ghost_hits: 19849
kstat.zfs.misc.arcstats.mfu_hits: 148638
kstat.zfs.misc.arcstats.mfu_ghost_hits: 139
kstat.zfs.misc.arcstats.deleted: 173038
kstat.zfs.misc.arcstats.recycle_miss: 4923
kstat.zfs.misc.arcstats.mutex_miss: 391
kstat.zfs.misc.arcstats.evict_skip: 2573460
kstat.zfs.misc.arcstats.hash_elements: 6836
kstat.zfs.misc.arcstats.hash_elements_max: 22730
kstat.zfs.misc.arcstats.hash_collisions: 33151
kstat.zfs.misc.arcstats.hash_chains: 655
kstat.zfs.misc.arcstats.hash_chain_max: 5
kstat.zfs.misc.arcstats.p: 134189056
kstat.zfs.misc.arcstats.c: 134217728
kstat.zfs.misc.arcstats.c_min: 16777216
kstat.zfs.misc.arcstats.c_max: 134217728
kstat.zfs.misc.arcstats.size: 113525020
kstat.zfs.misc.arcstats.hdr_size: 929696
kstat.zfs.misc.arcstats.l2_hits: 0
kstat.zfs.misc.arcstats.l2_misses: 0
kstat.zfs.misc.arcstats.l2_feeds: 0
kstat.zfs.misc.arcstats.l2_rw_clash: 0
kstat.zfs.misc.arcstats.l2_writes_sent: 0
kstat.zfs.misc.arcstats.l2_writes_done: 0
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 0
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 0
kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
kstat.zfs.misc.arcstats.l2_cksum_bad: 0
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_size: 0
kstat.zfs.misc.arcstats.l2_hdr_size: 0
kstat.zfs.misc.arcstats.memory_throttle_count: 0
kstat.zfs.misc.vdev_cache_stats.delegations: 2542
kstat.zfs.misc.vdev_cache_stats.hits: 1700
kstat.zfs.misc.vdev_cache_stats.misses: 2024

-------------------------------------------
Relevant sections of loader.conf:

vesa_load="YES"
zfs_load="YES"
vfs.root.mountfrom="zfs:tank/root"  # Note that this problem happened before booting zfs as root
vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="128M"
vfs.zfs.vdev.cache.size="64M"
vfs.zfs.prefetch_disable=0
# Have NOT recompiled the kernel for more memory, but I doubt it matters, as the system uses more than 1500MB without crashing. It only dies when free memory reaches zero.

>How-To-Repeat:
Install fresh FreeBSD 8 and make a very large file (4GB should do). Copy said file 100 times and watch free memory disappear with top. It took < 5 minutes to use up all 2GB of memory.
>Fix:
>Release-Note:
>Audit-Trail:
>Unformatted:
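The How-To-Repeat steps above can be sketched as a small script. The directory name and the default sizes below are illustrative assumptions, kept deliberately tiny so the sketch is safe to run anywhere; on the affected machine, setting FILE_MB=4096 and COPIES=100 matches the workload the report describes:

```shell
#!/bin/sh
# Sketch of the reproduction: create one large file, then copy it
# repeatedly while watching free memory fall in top(1) on another
# terminal. The report used a ~4 GB file copied 100 times.
DIR=${DIR:-/tmp/zfs-leak-repro}   # assumed path; point this at the ZFS pool
FILE_MB=${FILE_MB:-4}             # report: 4096 (a 4 GB file)
COPIES=${COPIES:-3}               # report: 100

mkdir -p "$DIR"
# bs=1m is the FreeBSD dd spelling; fall back to GNU dd's bs=1M.
dd if=/dev/zero of="$DIR/big.0" bs=1m count="$FILE_MB" 2>/dev/null ||
    dd if=/dev/zero of="$DIR/big.0" bs=1M count="$FILE_MB" 2>/dev/null

i=1
while [ "$i" -le "$COPIES" ]; do
    cp "$DIR/big.0" "$DIR/big.$i"
    i=$((i + 1))
done
ls "$DIR"
```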