From: Shane Ambler
Date: Mon, 06 Apr 2015 16:18:44 +0930
To: Karl Denninger, freebsd-fs@freebsd.org
Subject: Re: Swap usage with ZFS
Message-ID: <55222C4C.4090901@ShaneWare.Biz>
In-Reply-To: <5521475B.4010703@denninger.net>
References: <880944c05bb859ca0fc97b2d8606fe29@thebighonker.lerctr.org>
 <5521475B.4010703@denninger.net>

On 06/04/2015 00:01, Karl Denninger wrote:
> On 4/5/2015 09:23, Larry Rosenman wrote:
>> I have a -HEAD (11-CURRENT) box that has 64G of memory, but very
>> little load.
>>
>> The ZFS ARC grows to eat most of it, but I see around 200M in use in
>> swap. This was under control in 10.x.
>>
>> I'm wondering what information y'all need to help diagnose why.
>>
>> borg.lerctr.org /home/ler $ uname -aKU
>> FreeBSD borg.lerctr.org 11.0-CURRENT FreeBSD 11.0-CURRENT #32
>> r281050: Fri Apr 3 16:41:13 CDT 2015
>> root@borg.lerctr.org:/usr/obj/usr/src/sys/VT-LER amd64 1100067 1100067
>>
>> borg.lerctr.org /home/ler $ top
>> last pid: 26313;  load averages: 6.92, 6.79, 6.83  up 1+16:26:05  09:23:13
>> 80 processes: 4 running, 76 sleeping
>> CPU:  0.0% user, 46.9% nice,  0.3% system,  0.0% interrupt, 52.8% idle
>> Mem: 281M Active, 539M Inact, 59G Wired, 18M Cache, 8128K Buf, 1241M Free
>> ARC: 55G Total, 42G MFU, 9766M MRU, 1044K Anon, 568M Header, 3437M Other
>> Swap: 128G Total, 205M Used, 128G Free
>>
> This is consistent with how the VM system is expected to behave absent
> the patches I developed for 10-STABLE (and continue to maintain for
> same).
>
> In short, what is going on is that ZFS (absent those patches) will
> allow the ARC to grow until the pager not only wakes up and starts
> scavenging cache pages but actively starts evicting working set to the
> page file. It will then pare down the ARC, but at that point you have
> already paged out working-set process memory.
>
> I argue this is flat-out wrong: discarding ARC *possibly* costs one
> disk I/O (to retrieve said data) if the cached data is later needed,
> but a page-out of RSS *always* costs one disk I/O (to page out said
> data) and *possibly* costs two disk operations (if the RSS pages are
> later referenced).
>
> Therefore it is *never* the correct decision to favor paging out
> resident processes rather than discarding disk cache.
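To watch that grow-then-page-out cycle on a live box, the stock counters
are enough; a rough sketch using plain sysctl/swapinfo, nothing specific
to the patch:

#!/bin/sh
# Log ARC size against swap usage once a minute.
while :; do
    date
    sysctl -n kstat.zfs.misc.arcstats.size   # current ARC size, bytes
    swapinfo -k | tail -1                    # swap device line, KB units
    sleep 60
done

If the ARC keeps growing until the pager kicks in and the "Used" column
starts climbing, that is the behaviour described above.
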
>
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594
>
> I do not know if this will apply against -HEAD.

Sounds like it's related to one of my issues. On a machine with 8G,
getting 7G wired is a problem.

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=194654

Is there any chance that having vfs.zfs.arc_free_target smaller than
vm.v_free_target plays a part in this, or are those free targets
unrelated? (A quick way to compare the two is in the P.S. below.)

-- 
FreeBSD - the place to B...Storing Data

Shane Ambler
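P.S. For anyone wanting to compare the two targets on a running system,
both are plain sysctl OIDs (reading them is harmless; the second command
is only an experiment I am guessing at, not something taken from either
PR):

$ sysctl vfs.zfs.arc_free_target vm.v_free_target

# as root, try raising the ARC free target to match the VM one:
sysctl vfs.zfs.arc_free_target="$(sysctl -n vm.v_free_target)"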