Date: Thu, 9 Aug 2018 08:37:10 -0700
From: bob prohaska <fbsd@www.zefox.net>
To: Warner Losh
Cc: Mark Johnston, freebsd-arm@freebsd.org, bob prohaska
Subject: Re: RPI3 swap experiments ["was killed: out of swap space" with: "v_free_count: 5439, v_inactive_count: 1"]
Message-ID: <20180809153710.GC30347@www.zefox.net>

On Thu, Aug 09, 2018 at 09:28:09AM -0600, Warner Losh wrote:
> On Thu, Aug 9, 2018 at 9:21 AM, Mark Johnston wrote:
> > On Wed, Aug 08, 2018 at 11:56:48PM -0700, bob prohaska wrote:
> > > On Wed, Aug 08, 2018 at 04:48:41PM -0400, Mark Johnston wrote:
> > > > On Wed, Aug 08, 2018 at 08:38:00AM -0700, bob prohaska wrote:
> > > > > The patched kernel ran longer than default, but OOMA still
> > > > > halted buildworld around 13 MB. That's considerably farther
> > > > > than a default buildworld would have run, but less than
> > > > > observed when setting vm.pageout_oom_seq=120 alone. Log files
> > > > > are at
> > > > > http://www.zefox.net/~fbsd/rpi3/swaptests/r337226M/1gbsdflash_1gbusbflash/batchqueue/
> > > > >
> > > > > Both changes are now in place and -j4 buildworld has been
> > > > > restarted.
> > > >
> > > > Looking through the gstat output, I'm seeing some pretty abysmal
> > > > average write latencies for da0, the flash drive. I also realized
> > > > that my reference to r329882 lowering the pagedaemon sleep period
> > > > was wrong; things have been this way for much longer than that.
> > > > Moreover, as you pointed out, bumping oom_seq to a much larger
> > > > value wasn't quite sufficient.
> > > >
> > > > I'm curious as to what the worst-case swap I/O latencies are in
> > > > your test, since the average latencies reported in your logs are
> > > > high enough to trigger OOM kills even with the increased oom_seq
> > > > value.
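
[Aside, for anyone reproducing this: the two knobs under discussion,
as plain commands. vm.pageout_oom_seq is the real sysctl; the gstat
invocation is just one way to watch the swap device's write latency,
not necessarily what the logging scripts in these tests ran.]

    # Tolerate more consecutive failed pageout scans before the OOM
    # killer fires; the stock default is 12, these tests use 120.
    sysctl vm.pageout_oom_seq=120
    echo 'vm.pageout_oom_seq=120' >> /etc/sysctl.conf   # persist

    # Batch-mode gstat filtered to the swap device, 10-second samples;
    # the ms/w column is the average write latency being discussed.
    gstat -b -I 10s -f da0
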
> > > > When the current test finishes, could you try repeating it with
> > > > this patch applied on top?
> > > > https://people.freebsd.org/~markj/patches/slow_swap.diff
> > > > That is, keep the non-default oom_seq setting and the
> > > > modification to VM_BATCHQUEUE_SIZE, and apply this patch on top.
> > > > It'll cause the kernel to print messages to the console under
> > > > certain conditions, so a log of console output will be
> > > > interesting.
> > >
> > > The run finished with a panic; I've collected the logs and
> > > terminal output at
> > > http://www.zefox.net/~fbsd/rpi3/swaptests/r337226M/1gbsdflash_1gbusbflash/batchqueue/pageout120/slow_swap/
> > >
> > > There seems to be a considerable discrepancy between the wait
> > > times reported by the patch and the wait times reported by gstat
> > > in the first couple of occurrences. The fun begins at timestamp
> > > Wed Aug 8 21:26:03 PDT 2018 in swapscript.log.
> >
> > The reports of "waited for swap buffer" are especially bad: during
> > those periods, the laundry thread is blocked waiting for in-flight
> > swap writes to finish before sending any more. Because the system is
> > generally quite starved for clean pages that it can reuse, it's
> > relying on swap I/O to clean more. If that fails, the system
> > eventually has no choice but to start killing processes (where the
> > time period corresponding to "eventually" is determined by
> > vm.pageout_oom_seq).
> >
> > Based on these latencies, I think the system is behaving more or
> > less as expected from the VM's perspective. I do think the default
> > oom_seq value is too low and will get that addressed in 12.0.
>
> Yea. I think we need to take a more active role in managing latencies
> on some cards. Properly managed, they won't climb that high. Since
> there's no tagged queueing to these devices, there's an I/O depth of
> one. The default policy is to issue requests in order (since it's
> flash), which means that a process that machine-guns down requests
> swamps everybody else with back-to-back-to-back writes; at least for
> the few drives I have looked at in detail, that tends to induce
> pathological behavior.

There's a kernel building now with options CAM_IOSCHED_DYNAMIC in the
config file. Is it still worth trying? Anything else to try?

With my thanks,
bob prohaska
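
P.S. For the archive, the whole change is one line of kernel config.
Here is a sketch of how I'm building it; the IOSCHED config name is my
own, and the last line is only a guess at where the scheduler's knobs
will turn up, since I haven't booted the new kernel yet:

    # Minimal custom config on top of GENERIC (arm64, for the RPi3).
    cd /usr/src
    printf 'include GENERIC\nident IOSCHED\noptions CAM_IOSCHED_DYNAMIC\n' \
        > sys/arm64/conf/IOSCHED
    make -j4 buildkernel KERNCONF=IOSCHED
    make installkernel KERNCONF=IOSCHED

    # After reboot, hunt for the scheduler's per-device sysctls:
    sysctl kern.cam | grep -i sched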