From owner-freebsd-arm@freebsd.org Tue Aug 14 12:43:12 2018
Date: Tue, 14 Aug 2018 05:41:53 -0700 (PDT)
From: John Kennedy <warlock@phouka1.phouka.net>
To: bob prohaska
Cc: Mark Millard, Mark Johnston, freebsd-arm
Subject: Re: RPI3 swap experiments (grace under pressure)
Message-ID: <20180814124153.GF81324@phouka1.phouka.net>
In-Reply-To: <20180814014226.GA50013@www.zefox.net>
List-Id: "Porting FreeBSD to ARM processors."

On Mon, Aug 13, 2018 at 06:42:26PM -0700, bob prohaska wrote:
> I understand that the RPi isn't a primary platform for FreeBSD.
> But, decent performance under overload seems like a universal
> problem that's always worth solving, ...

  I don't think anything we're talking about explicitly rules out the
RPI as a platform, one way or the other, except in its ability to soak
up abuse. I think what you're pitching is basically a scheduling change
(in what tasks get run, with a not insignificant trickle-down to how
they're swapping).

  I think the "general case" is "plan your load so everything runs in
RAM", but knowing that there is generally an 80/20 rule (don't get hung
up on specific numbers -- there is a line, and I'm sure it'll move
around) of memory that doesn't need to stay resident versus memory that
does. Oversubscription. And as far as the scheduler goes, it just ends
up with a mess of dynamic needs.

  Personally, I consider swap a kludge and a type of overdraft
protection. I'm writing a bunch of checks and hoping they won't all get
cached (sorry, pun) at the same time, but sometimes that is beyond my
control.

> There's at least some degree of conflict between all of them,
> made worse when the workload grows beyond the design assumptions.
> The RPI makes the issue more visible, but it's always lurking.

  I think slow peripherals and lack of memory are the targets. I'd
never stick my swap onto something like a USB card if I didn't have to.

> OOM seems to sacrifice getting work done, potentially entirely,
> in support of keeping the system responsive and under control.

  I'm not a fan of OOM-death, but I think I understand the logic.
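
  (As a concrete illustration of the kind of threshold at issue:
FreeBSD exposes the vm.pageout_oom_seq sysctl, which sets how many
futile page-daemon passes are tolerated before the OOM killer starts
shooting. Raising it delays OOM kills at the cost of a longer
unresponsive thrash; the value below is only an example, not a
recommendation.)

```
# /etc/sysctl.conf -- example only; default is 12, larger values
# make the page daemon more patient before resorting to OOM kills
vm.pageout_oom_seq=120
```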
  It would be awesome if there were a scheduling algorithm that could
balance everything happily, but I think this tradeoff basically boils
down to responsiveness (is process A getting CPU time?) against
oversubscription of resources. "The beatings will continue until morale
improves", except we're talking about processes being taken out back
and shot in the head. Killing them seems extreme and somewhat
arbitrary, but I'd quickly degenerate into a list of what I'd prefer
was killed first and the code mess it might take to implement that.

  I can easily imagine scenarios with basically a swap deadlock (no way
to get a process into RAM to run in order to be "responsive"), where
you then have to make a decision: kill this thing that is basically
hung, or let it stick around indefinitely? And how many times do you do
that before ALL swap is exhausted? We're not talking about checking
your malloc() return code; we're talking about not even being able to
grow a stack to make that call.

  So I can see OOM-killing as a last-ditch defense against total system
failure (knowing that any OOM-killing might leave behind a not-failed
but useless system if a vital service is sacrificed). We're just
talking about a knob to control the threshold where it becomes
palatable. I don't think there is enough info to make the kind of
informed decision we'd like the scheduler to make.