From owner-freebsd-arch@FreeBSD.ORG Wed May 23 22:56:45 2007
Date: Wed, 23 May 2007 15:56:35 -0700 (PDT)
From: Jeff Roberson <jroberson@chesapeake.net>
To: arch@freebsd.org
In-Reply-To: <20070520155103.K632@10.0.0.1>
References: <20070520155103.K632@10.0.0.1>
Message-ID: <20070523155236.U9443@10.0.0.1>
Subject: Re: sched_lock && thread_lock()

Resuming the original intent of this thread:

http://www.chesapeake.net/~jroberson/threadlock.diff

I have updated this patch to the most recent -CURRENT.  I have included
a scheduler called sched_smp.c, which is a copy of ULE using per-cpu
scheduler spinlocks.  There are also changes to be slightly more
aggressive about updating the td_lock pointer when it has been blocked.
This continues to be stable in testing by Kris Kennaway and myself on
1- to 8-CPU machines.

Attilio is working on addressing concerns with the vmmeter diff.  It's
my fault for not sending this around to arch@ before committing.  I
apologize.

We will have one more diff before threadlock goes in, fixing rusage so
that it doesn't depend on a global scheduler lock.  I will mail that
here for review.  After that I intend to commit threadlock.  Please
complain sooner rather than later!

Thanks,
Jeff

On Sun, 20 May 2007, Jeff Roberson wrote:

> Attilio and I have been working on addressing the increasing problem
> of sched_lock contention on -CURRENT.  Attilio has been taking the
> parts of the kernel which do not need to fall under the scheduler lock
> and moving them under separate locks: for example, the ldt/gdt lock
> and the clock lock committed earlier, and the use of atomics for the
> vmcnt structure.
>
> I have been working on an approach that uses thread locks rather than
> a global scheduler lock.  The design is similar to Solaris's container
> locks, but the details are different.  The basic idea is to have a
> pointer in the thread structure that points at a spinlock protecting
> the thread.  This spinlock may be one of the scheduler lock, a
> turnstile lock, or a sleep queue lock.  As the thread changes state
> from running to blocked on a lock or sleeping, the lock changes with
> it.
>
> This has several advantages.  The majority of the kernel simply calls
> thread_lock(), which figures out the details.  The kernel then knows
> nothing of the particulars of the scheduler locks, and the schedulers
> are free to implement them in any way they like.  Furthermore, in some
> cases the locking is reduced, because locking the thread has the side
> effect of locking the container.
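>
> To make that concrete, the acquire path looks roughly like this (a
> simplified sketch; the names and details are illustrative, not the
> literal patch):
>
>     struct thread {
>             ...
>             struct mtx *td_lock;  /* spinlock protecting this thread */
>             ...
>     };
>
>     void
>     thread_lock(struct thread *td)
>     {
>             struct mtx *m;
>
>             for (;;) {
>                     m = td->td_lock;
>                     mtx_lock_spin(m);
>                     /*
>                      * The pointer may have been switched to another
>                      * lock while we waited; if so, drop the stale
>                      * lock and retry on the new one.
>                      */
>                     if (m == td->td_lock)
>                             return;
>                     mtx_unlock_spin(m);
>             }
>     }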
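>
> Handing a thread off to a container is then just repointing td_lock
> while holding both locks.  Again, only a sketch under the same
> assumptions:
>
>     /*
>      * Switch td to a new protecting lock, e.g. from the scheduler
>      * lock to a sleep queue chain lock.  The caller holds both the
>      * thread's current lock and the new one.
>      */
>     void
>     thread_lock_set(struct thread *td, struct mtx *new)
>     {
>             struct mtx *old;
>
>             mtx_assert(new, MA_OWNED);
>             old = td->td_lock;
>             mtx_assert(old, MA_OWNED);
>             td->td_lock = new;
>             mtx_unlock_spin(old);
>     }
>
> A sleepqueue or turnstile would do something like this at the point
> where the thread blocks, so that a later thread_lock() caller
> transparently ends up spinning on the container's lock instead.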
>
> This patch does not implement per-cpu scheduler locks.  It just
> changes the kernel to support this model.  I have a fork of ULE in
> development that runs with per-cpu locks, but it is not ready yet.
> This means there should be very little change in system performance
> until the scheduler catches up.  In fact, on a 2-CPU system the
> difference is immeasurable, or almost so, on every workload I have
> tested.  On an 8-way Opteron system the results vary between +10% on
> some reasonable workloads and -15% on super-smack, which has some
> inherent problems that I believe are not exposing real performance
> problems with this patch.
>
> This has also been tested extensively by Kris and myself on a variety
> of machines, and I believe it to be fairly solid.  The only thing
> remaining to do is fix rusage so that it does not rely on a global
> scheduler lock.
>
> I am posting the patch here in case anyone with specific knowledge of
> turnstiles, sleepqueues, or signals would like to review it, and as a
> general heads-up to people interested in where the kernel is headed.
>
> This will apply to -CURRENT just prior to my kern_clock.c commits.  I
> will re-merge and update again in the next few days, probably after we
> sort out rusage.
>
> http://people.freebsd.org/~jeff/threadlock.diff
>
> Thanks,
> Jeff