From: Joseph Mingrone
To: freebsd-fs@freebsd.org
Subject: memory exhaustion on 10.1 AMD64 ZFS storage system
Date: Mon, 12 Jan 2015 15:56:21 -0400
Message-ID: <868uh7ydqy.fsf@gly.ftfl.ca>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.4 (berkeley-unix)

Hello,

This storage system ran 9.x without problems.  Since upgrading to 10.1,
processes have been killed with "out of swap space" messages in the logs.

Dec 13 04:29:12 storage2 kernel: pid 723 (rpc.statd), uid 0, was killed: out of swap space
...
Jan 11 23:23:51 storage2 kernel: pid 642 (mountd), uid 0, was killed: out of swap space

What's the best way to determine whether this is a ZFS problem?  The
10.1 release notes say that vfs.zfs.zio.use_uma has been re-enabled.
Has it caused problems for anyone else on 10.1?
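To try to narrow this down on my end, the first things I plan to check
are whether the UMA-backed ZIO allocator is actually in use, which UMA
zones are holding the most memory, and how the ARC's own size compares
with total wired memory.  Roughly this (the awk field handling assumes
the stock comma-separated vmstat -z layout, so treat it as a sketch
rather than a recipe):

# sysctl vfs.zfs.zio.use_uma
# vmstat -z | awk -F'[:,] *' 'NF > 5 { printf "%14.0f  %s\n", $2 * $4, $1 }' \
      | sort -rn | head
# sysctl kstat.zfs.misc.arcstats.size vm.stats.vm.v_wire_count hw.pagesize

The awk multiplies each zone's item size by its used count (fields 2
and 4 once the zone name is split off), so the head of the sorted
output should show which zones, ZIO buffers included, are eating
memory.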
Below is information about the server.

Joseph

# cat /boot/loader.conf
zfs_load=YES
vfs.root.mountfrom="zfs:zroot"
vfs.zfs.arc_max=24G

# zfs-stats -F

------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Jan 12 15:52:21 2015
------------------------------------------------------------------------

System Information:

        Kernel Version:                         1001000 (osreldate)
        Hardware Platform:                      amd64
        Processor Architecture:                 amd64

FreeBSD 10.1-RELEASE #0 r274401: Tue Nov 11 21:02:49 UTC 2014 root
 3:52PM  up 30 mins, 1 user, load averages: 0.14, 0.15, 0.14
------------------------------------------------------------------------

# zfs-stats -M

------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Jan 12 15:52:56 2015
------------------------------------------------------------------------

System Memory Statistics:
        Physical Memory:                        32706.64M
        Kernel Memory:                          164.14M
        DATA:                           84.30%  138.38M
        TEXT:                           15.70%  25.76M
------------------------------------------------------------------------

# zfs-stats -p

------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Jan 12 15:53:20 2015
------------------------------------------------------------------------

ZFS pool information:
        Storage pool Version (spa):             5000
        Filesystem Version (zpl):               5
------------------------------------------------------------------------

# zfs-stats -A

------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Jan 12 15:53:43 2015
------------------------------------------------------------------------

ARC Misc:
        Deleted:                                20
        Recycle Misses:                         0
        Mutex Misses:                           0
        Evict Skips:                            0

ARC Size:
        Current Size (arcsize):         0.17%   40.87M
        Target Size (Adaptive, c):      100.00% 24576.00M
        Min Size (Hard Limit, c_min):   12.50%  3072.00M
        Max Size (High Water, c_max):   ~8:1    24576.00M

ARC Size Breakdown:
        Recently Used Cache Size (p):   50.00%  12288.00M
        Freq. Used Cache Size (c-p):    50.00%  12288.00M

ARC Hash Breakdown:
        Elements Max:                           1583
        Elements Current:               100.00% 1583
        Collisions:                             0
        Chain Max:                              0
        Chains:                                 0

ARC Eviction Statistics:
        Evicts Total:                           172032
        Evicts Eligible for L2:         97.62%  167936
        Evicts Ineligible for L2:       2.38%   4096
        Evicts Cached to L2:                    0

ARC Efficiency
        Cache Access Total:                     44696
        Cache Hit Ratio:                95.38%  42632
        Cache Miss Ratio:               4.62%   2064
        Actual Hit Ratio:               85.21%  38084

        Data Demand Efficiency:         97.50%
        Data Prefetch Efficiency:       8.51%

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             10.67%  4548
          Most Recently Used (mru):     39.98%  17044
          Most Frequently Used (mfu):   49.35%  21040
          MRU Ghost (mru_ghost):        0.00%   0
          MFU Ghost (mfu_ghost):        0.00%   0

        CACHE HITS BY DATA TYPE:
          Demand Data:                  48.37%  20619
          Prefetch Data:                0.01%   4
          Demand Metadata:              40.97%  17465
          Prefetch Metadata:            10.66%  4544

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  25.63%  529
          Prefetch Data:                2.08%   43
          Demand Metadata:              52.18%  1077
          Prefetch Metadata:            20.11%  415
------------------------------------------------------------------------

# zpool list
NAME     SIZE  ALLOC   FREE  FRAG  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
tank    24.5T  11.1T  13.4T   14%         -  45%  1.00x  ONLINE  -
zroot   55.5G  6.11G  49.4G    5%         -  11%  1.00x  ONLINE  -

# zpool get "all" tank
NAME  PROPERTY                       VALUE                SOURCE
tank  size                           24.5T                -
tank  capacity                       45%                  -
tank  altroot                        -                    default
tank  health                         ONLINE               -
tank  guid                           8322714406813719098  default
tank  version                        -                    default
tank  bootfs                         -                    default
tank  delegation                     on                   default
tank  autoreplace                    off                  default
tank  cachefile                      -                    default
tank  failmode                       wait                 default
tank  listsnapshots                  off                  default
tank  autoexpand                     off                  default
tank  dedupditto                     0                    default
tank  dedupratio                     1.00x                -
tank  free                           13.4T                -
tank  allocated                      11.1T                -
tank  readonly                       off                  -
tank  comment                        -                    default
tank  expandsize                     0                    -
tank  freeing                        0                    default
tank  fragmentation                  14%                  -
tank  leaked                         0                    default
tank  feature@async_destroy          enabled              local
tank  feature@empty_bpobj            enabled              local
tank  feature@lz4_compress           active               local
tank  feature@multi_vdev_crash_dump  enabled              local
tank  feature@spacemap_histogram     active               local
tank  feature@enabled_txg            active               local
tank  feature@hole_birth             active               local
tank  feature@extensible_dataset     enabled              local
tank  feature@embedded_data          active               local
tank  feature@bookmarks              enabled              local
tank  feature@filesystem_limits      enabled              local

# zdb -C tank

MOS Configuration:
        version: 5000
        name: 'tank'
        state: 0
        txg: 12614760
        pool_guid: 8322714406813719098
        hostid: 1722087693
        hostname: 'storage2.mathstat.dal.ca'
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 8322714406813719098
            children[0]:
                type: 'raidz'
                id: 0
                guid: 5865699514822950384
                nparity: 3
                metaslab_array: 31
                metaslab_shift: 37
                ashift: 12
                asize: 27005292380160
                is_log: 0
                create_txg: 4
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 6285638336980483158
                    path: '/dev/label/storage_disk0'
                    phys_path: '/dev/label/storage_disk0'
                    whole_disk: 1
                    DTL: 106
                    create_txg: 4
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 9541693314532360771
                    path: '/dev/label/storage_disk1'
                    phys_path: '/dev/label/storage_disk1'
                    whole_disk: 1
                    DTL: 105
                    create_txg: 4
                children[2]:
                    type: 'disk'
                    create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 310723121207304329
                path: '/dev/gpt/disk0'
                phys_path: '/dev/gpt/disk0'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 16696203411283061195
                path: '/dev/gpt/disk1'
                phys_path: '/dev/gpt/disk1'
                whole_disk: 1
                create_txg: 4
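One caveat about the numbers above: they were collected only 30 minutes
after a reboot, when the ARC held just ~41M, so they probably don't
show the problem yet.  To catch it as it develops, I've started logging
ARC size against wired memory once a minute with a small loop along
these lines (the log path is just an example; wired is reported in
pages, so multiply by hw.pagesize for bytes):

#!/bin/sh
# Log ARC size (bytes) and wired memory (pages) once a minute.
while :; do
    printf '%s arc=%s wired_pages=%s\n' \
        "$(date '+%Y-%m-%d %H:%M:%S')" \
        "$(sysctl -n kstat.zfs.misc.arcstats.size)" \
        "$(sysctl -n vm.stats.vm.v_wire_count)"
    sleep 60
done >> /var/log/memwatch.log

If wired memory climbs while the ARC stays small, that would point at
the kernel side (e.g. UMA) rather than the ARC itself.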