From owner-freebsd-stable@freebsd.org Mon Aug 13 19:48:36 2018
Subject: Re: All the memory eaten away by ZFS 'solaris' malloc - on 11.1-R amd64
To: Mark Martinec, stable@FreeBSD.org
From: Volodymyr Kostyrko
Date: Mon, 13 Aug 2018 22:48:22 +0300

23.07.18 18:12, Mark Martinec wrote:
> After upgrading an older AMD host from FreeBSD 10.3 to 11.1-RELEASE-p11
> (amd64), ZFS is gradually eating up all memory, so that it crashes every
> few days when the memory is completely exhausted (after swapping heavily
> for a couple of hours).

I've been in the same situation: ZFS only, a single pool, no ZFS errors. I
think the problem lies in the interaction between swapping and the ZFS ARC
rather than in ZFS itself. This host has a varying load; sometimes it needs
more active memory, sometimes less, so the active zone can expand and shrink
by roughly +/-2 GB (I have 16 GB installed there). The problem is that when a
huge task sits idle it doesn't use much active memory, and other activity
pushes its pages out to swap. When active memory runs low and the ARC holds
more than 50% of RAM, it becomes very hard to make the ARC give any memory
back.
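
For reference, a quick way to watch this from the shell (only a sketch, and
it assumes the stock FreeBSD names for the ARC and VM counters):

  # live ARC size and its configured ceiling, in bytes
  sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max
  # free/active/wired page counts, to compare against the ARC
  sysctl vm.stats.vm.v_free_count vm.stats.vm.v_active_count vm.stats.vm.v_wire_count

Watching these while the box is swapping shows whether the ARC actually
gives anything back under pressure.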
My host was even brought to the point where it couldn't get tasks back into
memory from swap: while some pages were being restored from swap, time passed
and other pages were pushed out to swap again due to ARC activity. Eventually
the active zone shrinks so badly that the host becomes unresponsive.

About six months ago I tried tweaking the kernel and swap settings to push
things the other way. Currently I have `vm.swap_idle_enabled=1` in
/boot/loader.conf and it looks like this solves my problem. Other interesting
knobs to look at are `vfs.zfs.arc_free_target`, `vfs.zfs.arc_shrink_shift`
and `vfs.zfs.arc_grow_retry`. Or you can take another route and simply limit
the ARC size with `vfs.zfs.arc_max` (a rough sketch follows below).

Hope that helps.

-- 
Sphinx of black quartz judge my vow.
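
For completeness, a minimal sketch of capping the ARC; the 4 GB value is
only an illustration for a 16 GB box, not a recommendation:

  # /boot/loader.conf -- hard ceiling for the ARC, in bytes
  vfs.zfs.arc_max="4294967296"

  # verify after reboot: configured ceiling vs. live ARC size
  sysctl vfs.zfs.arc_max kstat.zfs.misc.arcstats.size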