From owner-freebsd-hackers@FreeBSD.ORG Mon Nov 18 12:57:10 2013
Date: Mon, 18 Nov 2013 14:57:04 +0200
From: Alexander Motin
To: Adrian Chadd
Cc: "freebsd-hackers@freebsd.org", "freebsd-current@freebsd.org"
Subject: Re: UMA cache back pressure
Message-ID: <528A0EA0.3040408@FreeBSD.org>
References: <52894C92.60905@FreeBSD.org> <5289DBF9.80004@FreeBSD.org>

On 18.11.2013 14:10, Adrian Chadd wrote:
> On 18 November 2013 01:20, Alexander Motin wrote:
>> On 18.11.2013 10:41, Adrian Chadd wrote:
>>> So, do you get any benefits from just the first one, or first two?
>>
>> I don't see much reason to handle that in pieces. As I have described
>> above, each part has its own goal, but they work much better together.
>
> Well, with changes like this, having them broken up and committed in
> small pieces makes it easier for people to do regression testing.
>
> If you introduce some regression in a particular workload, then the
> user or developer is only going to find that it's this patch, and won't
> necessarily know how to break it down into pieces to see which piece
> actually introduced the regression in their specific workload.

I can't argue with that, but too many small pieces turn later merging
into a headache. This patch is not so big that it can't be reviewed as
one piece. As for a better commit message -- hint accepted. :)

> I totally agree that this should be done! It just does seem to be
> something that could be committed in smaller pieces quite easily, so as
> to make potential debugging later on down the road much easier. Each
> commit builds on the previous commit.
>
> So, something like (in order):
>
> * add two new buckets, here's why
> * fix locking, here's why
> * soft back pressure
> * aggressive back pressure

I can do that if you insist; I would just use a different order
(3, 1, 4, 2). 2 without 3 will make buckets grow faster, which may be
bad without back pressure.
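
To show what I mean by that interplay, here is a rough user-space
sketch. It is only an illustration: the names, sizes and growth policy
are made up and are not the actual sys/vm/uma_core.c code. The point is
that bucket sizes grow while a zone keeps draining full buckets, and the
soft back-pressure hook trims them again when memory gets tight.

	/*
	 * Hypothetical sketch, not the real UMA implementation.
	 * Models a per-zone bucket size that grows under allocation
	 * pressure and shrinks again when back pressure is signalled.
	 */
	#include <stdio.h>

	#define BUCKET_MIN	2	/* smallest bucket size, in items */
	#define BUCKET_MAX	256	/* hypothetical upper limit */

	struct zone {
		int	bucket_size;	/* current per-CPU bucket size */
		int	pressure;	/* nonzero while memory is scarce */
	};

	/* A CPU emptied its bucket: demand is high, let the bucket grow. */
	static void
	zone_bucket_miss(struct zone *z)
	{

		if (z->pressure == 0 && z->bucket_size < BUCKET_MAX)
			z->bucket_size *= 2;
	}

	/* Soft back pressure: trim cached memory instead of purging it all. */
	static void
	zone_lowmem(struct zone *z)
	{

		z->pressure = 1;
		if (z->bucket_size > BUCKET_MIN)
			z->bucket_size /= 2;
	}

	int
	main(void)
	{
		struct zone z = { BUCKET_MIN, 0 };
		int i;

		for (i = 0; i < 6; i++)
			zone_bucket_miss(&z);	/* busy period: buckets grow */
		printf("after load:   bucket_size = %d\n", z.bucket_size);

		zone_lowmem(&z);		/* low-memory event: trim back */
		printf("after lowmem: bucket_size = %d\n", z.bucket_size);
		return (0);
	}

With only the growth half (2 without 3), bucket_size would just keep
climbing toward its limit, which is exactly the behavior I want the
back pressure to keep in check.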
> Did you get profiling traces from the VM free paths? Is it because
> it's churning the physical pages through the VM physical allocator?
> or?

Yes. Without use_uma enabled I've seen up to 50% of CPU time burned on
locks held around expensive VM magic such as TLB shootdown, etc. With
use_uma enabled the situation improved a lot, but I've seen periodic
bursts, which I guess happened when the system was getting low on
memory and started aggressively purging gigabytes of oversized caches.
With this patch I haven't noticed such behavior at all so far, though
that may be subjective, since the test runs for quite some time and the
load is not very stationary.

-- 
Alexander Motin