From owner-freebsd-arch@FreeBSD.ORG Mon May 21 09:32:00 2007
X-Original-To: arch@FreeBSD.org
Delivered-To: freebsd-arch@FreeBSD.ORG
Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52])
	by hub.freebsd.org (Postfix) with ESMTP id 4176116A41F;
	Mon, 21 May 2007 09:32:00 +0000 (UTC)
	(envelope-from jroberson@chesapeake.net)
Received: from webaccess-cl.virtdom.com (webaccess-cl.virtdom.com [216.240.101.25])
	by mx1.freebsd.org (Postfix) with ESMTP id E51FC13C489;
	Mon, 21 May 2007 09:31:59 +0000 (UTC)
	(envelope-from jroberson@chesapeake.net)
Received: from [192.168.1.101] (c-71-231-138-78.hsd1.or.comcast.net [71.231.138.78])
	(authenticated bits=0)
	by webaccess-cl.virtdom.com (8.13.6/8.13.6) with ESMTP id l4L9Vvl5074944
	(version=TLSv1/SSLv3 cipher=DHE-DSS-AES256-SHA bits=256 verify=NO);
	Mon, 21 May 2007 05:31:58 -0400 (EDT)
	(envelope-from jroberson@chesapeake.net)
Date: Mon, 21 May 2007 02:31:54 -0700 (PDT)
From: Jeff Roberson <jroberson@chesapeake.net>
X-X-Sender: jroberson@10.0.0.1
To: Attilio Rao
In-Reply-To: <4651CE2F.8080908@FreeBSD.org>
Message-ID: <20070521022847.D679@10.0.0.1>
References: <20070520155103.K632@10.0.0.1> <20070521113648.F86217@besplex.bde.org>
 <20070520213132.K632@10.0.0.1> <4651CAB8.8070007@FreeBSD.org>
 <4651CE2F.8080908@FreeBSD.org>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
Cc: arch@FreeBSD.org, Bruce Evans
Subject: Re: sched_lock && thread_lock()
X-BeenThere: freebsd-arch@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Discussion related to FreeBSD architecture
X-List-Received-Date: Mon, 21 May 2007 09:32:00 -0000

On Mon, 21 May 2007, Attilio Rao wrote:

> Attilio Rao wrote:
>> Jeff Roberson wrote:
>>>
>>> On Mon, 21 May 2007, Bruce Evans wrote:
>>>
>>>> On Sun, 20 May 2007, Jeff Roberson wrote:
>>>>
>>>>> Attilio and I have been working on addressing the increasing problem
>>>>> of sched_lock contention on -CURRENT.  Attilio has been addressing
>>>>> the parts of the kernel which do not need to fall under the scheduler
>>>>> lock and moving them into separate locks.  For example, the ldt/gdt
>>>>> lock and clock lock which were committed earlier.  Also, using
>>>>> atomics for the vmcnt structure.
>>>>
>>>> Using atomics in the vmmeter struct is mostly just a pessimization
>>>> and an obfuscation, since locks are still needed for accesses to more
>>>> than one variable at a time.  For these cases, locks are needed for
>>>
>>> You are right; there are some cases which this pessimized.  I wanted
>>> to make sure the cnt members that were previously protected by the
>>> sched_lock were still correct.  However, I overlooked some of these
>>> which were accessed many at a time.  What should happen is we should
>>> find out whether any locks do protect the remaining members and, if
>>> so, not use VMCNT*, but mark the header describing how they are
>>> protected.
>>
>> Sorry, but I strongly disagree.
>
> Ah, and about the consistency of functions you previously described, I
> assume nothing vital is linked to it.
> vmmeter is just a statistics collector and nothing else, so I don't
> expect anything critical/vital to depend on its fields (I'm sure a lot
> of variables are just bumped up and never decreased, for example).  If
> that really does happen, we should fix that behaviour rather than
> making things a lot heavier.

Well, Attilio is right that in most cases using a lock to save a few
atomics is going to be more expensive.
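To make the distinction concrete, here is a minimal sketch of the two
access patterns under discussion.  The struct, its field names, and
stats_mtx are made-up illustrations, not the real vmmeter code: a
single-field bump is fine with an atomic, while a check that reads
several fields together needs a lock (or must tolerate staleness).

/*
 * Illustrative sketch only -- struct vmstats, its fields, and
 * stats_mtx are hypothetical names, not the actual vmmeter code.
 */
#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <machine/atomic.h>

struct vmstats {
	volatile u_int	v_free;		/* pages on the free lists */
	volatile u_int	v_cache;	/* pages on the cache queue */
};

static struct vmstats	stats;
static struct mtx	stats_mtx;	/* assume mtx_init() at boot */

/*
 * One field, one update: an atomic is enough, and cheaper than
 * taking a lock just to bump a counter.
 */
static void
page_freed(void)
{

	atomic_add_int(&stats.v_free, 1);
}

/*
 * A paging-target style check reads two fields together.  Atomics
 * keep each field individually sane, but only a lock keeps the
 * pair consistent with each other at the moment of the comparison.
 */
static int
below_target(u_int target)
{
	int ret;

	mtx_lock(&stats_mtx);
	ret = (stats.v_free + stats.v_cache < target);
	mtx_unlock(&stats_mtx);
	return (ret);
}
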
Beyond the atomic operations themselves, there is also the procedural
cost of taking the lock, the cache miss, and so on.  However, in some
cases there is already a lock available that protects the counter.
Furthermore, there are a few cases, most notably the paging targets,
where code depends on the value of the counters.

For most fields I believe we have a good approach; however, a few
could be improved slightly.  The question is whether it's worth
accessing the counters inconsistently just to save a few atomics that
likely have an immeasurable performance impact.

Thanks,
Jeff

>
> Thanks,
> Attilio
>