From: Adrian Chadd <adrian.chadd@gmail.com>
Date: Thu, 12 Mar 2015 09:37:00 -0700
To: Ryan Stone
Cc: FreeBSD Current <freebsd-current@freebsd.org>
Subject: Re: [PATCH] Convert the VFS cache lock to an rmlock

Do you have access to any boxes that have more than 12 cores? (like
36, 64, 80+?)

-adrian

On 12 March 2015 at 08:14, Ryan Stone wrote:
> I've just submitted a patch to Differential [1] for review that converts
> the VFS cache to use an rmlock in place of the current rwlock. My main
> motivation for the change is to fix a priority inversion problem that I
> saw recently. A real-time priority thread attempted to acquire a write
> lock on the VFS cache lock, but there was already a reader holding it.
> The reader was preempted by a normal-priority thread, and my real-time
> thread was starved.
>
> [1] https://reviews.freebsd.org/D2051
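For context, converting an rwlock user over to rmlock(9) follows a
standard pattern: each reader carries a per-acquisition rm_priotracker,
which is what lets a blocked writer find the current read owners and
propagate priority to them, avoiding exactly the inversion described
above. A minimal sketch with illustrative names (not the actual D2051
diff):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/rmlock.h>

/* Hypothetical stand-in for the VFS cache lock. */
static struct rmlock example_cache_lock;

static void
example_cache_init(void)
{
        rm_init(&example_cache_lock, "example_cache");
}

/*
 * Read path: the rm_priotracker lives on the reader's stack and records
 * this thread as a read owner, so a writer blocked in rm_wlock() can
 * lend it priority instead of being starved behind a preempted reader.
 */
static void
example_cache_lookup(void)
{
        struct rm_priotracker tracker;

        rm_rlock(&example_cache_lock, &tracker);
        /* ... read-only lookup ... */
        rm_runlock(&example_cache_lock, &tracker);
}

/* Write path: same shape as rw_wlock()/rw_wunlock(). */
static void
example_cache_update(void)
{
        rm_wlock(&example_cache_lock);
        /* ... modify the cache ... */
        rm_wunlock(&example_cache_lock);
}

Readers normally touch only per-CPU state, so the read path stays cheap
and the writers pay the extra cost, which is why rmlocks suit a
read-mostly structure like the namecache.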
> I was worried about the performance implications of the change, as I
> wasn't sure how common write operations on the VFS cache would be. I did
> a -j12 buildworld/buildkernel test on a 12-core Haswell Xeon system, as
> I figured that would be a reasonable stress test that simultaneously
> creates lots of small files and reads a lot of files as well. This
> actually wound up being about a 10% performance *increase* (the units
> below are seconds of elapsed time as measured by /usr/bin/time, so
> smaller is better):
>
> $ ministat -C 1 orig.log rmlock.log
> x orig.log
> + rmlock.log
> [ministat distribution plot elided: the five rmlock (+) samples cluster
> tightly to the left of the six rwlock (x) samples]
>     N           Min           Max        Median           Avg        Stddev
> x   6       2710.31       2821.35       2816.75     2798.0617     43.324817
> +   5       2488.25       2500.25       2498.04      2495.756     5.0494782
> Difference at 95.0% confidence
>         -302.306 +/- 44.4709
>         -10.8041% +/- 1.58935%
>         (Student's t, pooled s = 32.4674)
>
> The one outlier in the rwlock case does confuse me a bit. What I did was
> boot a freshly-built image with the rmlock patch applied, do a git
> checkout of head, and then do 5 builds in a row. The git checkout should
> have had the effect of priming the disk cache with the source files.
> Then I installed the stock head kernel, rebooted, and ran 5 more builds
> (and then 1 more when I noticed the outlier). The fast outlier was the
> *first* run, which should have been running with a cold disk cache, so I
> really don't know why it would be 90 seconds faster. I do see that this
> run also had about 500-600 fewer seconds spent in system time:
>
> x orig.log
> [ministat distribution plot elided: the fast outlier sits well to the
> left of the other five samples]
>     N           Min           Max        Median           Avg        Stddev
> x   6       3515.23       4121.84       4105.57       4001.71     239.61362
>
> I'm not sure how much I care, given that the rmlock is universally
> faster (but maybe I should try the "cold boot" case anyway).
>
> If anybody has any comments or further testing they would like to see,
> please let me know.
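For anyone wondering where the confidence interval above comes from:
ministat's comparison is the standard pooled-variance two-sample
Student's t test. A small standalone sketch (not ministat's code, just
the same arithmetic) that reproduces the elapsed-time numbers from the
summary statistics alone:

/* Recompute ministat's 95% confidence comparison from the summary
 * statistics printed above.  Build with: cc -o mstat mstat.c -lm */
#include <math.h>
#include <stdio.h>

int
main(void)
{
        double n1 = 6, avg1 = 2798.0617, sd1 = 43.324817; /* orig.log   */
        double n2 = 5, avg2 = 2495.756,  sd2 = 5.0494782; /* rmlock.log */

        /* Pooled standard deviation, n1 + n2 - 2 = 9 degrees of freedom. */
        double sp = sqrt(((n1 - 1) * sd1 * sd1 + (n2 - 1) * sd2 * sd2) /
            (n1 + n2 - 2));

        /* Two-sided Student's t critical value, 9 d.o.f., 95% confidence. */
        double t = 2.262;

        double diff = avg2 - avg1;
        double hw = t * sp * sqrt(1.0 / n1 + 1.0 / n2);

        printf("pooled s = %.4f\n", sp);            /* 32.4674            */
        printf("%.3f +/- %.3f\n", diff, hw);        /* -302.306 +/- 44.47 */
        printf("%.4f%% +/- %.4f%%\n",
            100.0 * diff / avg1, 100.0 * hw / avg1); /* -10.80% +/- 1.59% */
        return (0);
}

The very small rmlock stddev (5.0 vs 43.3 seconds) is what keeps the
interval so narrow despite there being only five samples in that set.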