Date: Sun, 20 May 2007 16:11:29 -0700 (PDT)
From: Jeff Roberson <jroberson@chesapeake.net>
To: smp@freebsd.org, threads@freebsd.org, performance@freebsd.org
Subject: sched_lock && thread_lock() (fwd)

In case any of you missed it, I sent this mail to arch@. Please keep the discussion there.

Thanks,
Jeff

---------- Forwarded message ----------
Date: Sun, 20 May 2007 16:07:53 -0700 (PDT)
From: Jeff Roberson <jroberson@chesapeake.net>
To: arch@freebsd.org
Subject: sched_lock && thread_lock()

Attilio and I have been working on addressing the increasing problem of sched_lock contention on -CURRENT. Attilio has been taking the parts of the kernel which do not need to fall under the scheduler lock and moving them under separate locks: for example, the ldt/gdt lock and the clock lock, which were committed earlier, as well as using atomics for the vmcnt structure (a sketch of the atomics conversion appears below, after the thread_lock() sketch).

I have been working on an approach that uses thread locks rather than a global scheduler lock. The design is similar to Solaris's container locks, but the details are different. The basic idea is to have a pointer in the thread structure that points at the spinlock which currently protects the thread. This spinlock may be the scheduler lock, a turnstile lock, or a sleep queue lock. As the thread changes state from running to blocked on a lock or sleeping, the lock changes with it.

This has several advantages. The majority of the kernel simply calls thread_lock(), which figures out the details. The kernel then knows nothing of the particulars of the scheduler locks, and the schedulers are free to implement them in any way they like. Furthermore, in some cases the locking is reduced, because locking the thread has the side effect of locking the container.
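To make that concrete, here is a minimal sketch of the indirection, written against a simplified view of the spin mutex API. It is illustrative only, and the actual patch differs in its details; the retry loop is the important part, since the lock pointer can move between the moment you read it and the moment you acquire the lock it pointed at.

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

/*
 * Simplified stand-in for the real struct thread: td_lock points at
 * whichever spinlock protects this thread right now (the scheduler
 * lock, a turnstile lock, or a sleepqueue lock).
 */
struct thread {
	struct mtx *volatile td_lock;
	/* ... */
};

void
thread_lock(struct thread *td)
{
	struct mtx *m;

	for (;;) {
		m = td->td_lock;	/* snapshot the pointer */
		mtx_lock_spin(m);
		if (m == td->td_lock)	/* still the thread's lock? */
			return;		/* yes: thread is now locked */
		mtx_unlock_spin(m);	/* no: it moved; retry */
	}
}

void
thread_unlock(struct thread *td)
{
	mtx_unlock_spin(td->td_lock);
}

/*
 * A container (turnstile, sleepqueue, run queue) takes ownership of a
 * thread by repointing td_lock while both the old and the new lock
 * are held, then dropping the old one.
 */
void
thread_lock_set(struct thread *td, struct mtx *new)
{
	struct mtx *old;

	old = td->td_lock;	/* caller holds both old and new */
	td->td_lock = new;
	mtx_unlock_spin(old);
}

Because a blocked thread's td_lock points at its container's spinlock, thread_lock() on that thread implicitly locks the container as well, which is where the reduced locking comes from.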
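The vmcnt side mentioned earlier works in the same spirit. Again just a sketch, with a made-up counter name rather than the actual vmcnt fields: a statistic that only needs to stay consistent, not be serialized with anything else, can be bumped with an atomic op instead of being taken under sched_lock.

#include <sys/param.h>
#include <machine/atomic.h>

static volatile u_int vm_stat_example;	/* hypothetical counter */

static void
vm_stat_bump(void)
{
	/*
	 * Before: mtx_lock_spin(&sched_lock); vm_stat_example++;
	 *         mtx_unlock_spin(&sched_lock);
	 */
	atomic_add_int(&vm_stat_example, 1);
}

That removes such updates from under the scheduler lock entirely.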
This patch does not implement per-cpu scheduler locks; it just changes the kernel to support this model. I have a fork of ULE in development that runs with per-cpu locks, but it is not ready yet. This means that there should be very little change in system performance until the scheduler catches up. In fact, on a 2-CPU system the difference is immeasurable, or nearly so, on every workload I have tested. On an 8-way Opteron system the results vary between +10% on some reasonable workloads and -15% on super-smack, which has some inherent problems of its own and, I believe, is not exposing real performance problems with this patch.

This has also been tested extensively by Kris and myself on a variety of machines, and I believe it to be fairly solid. The only thing remaining to do is to fix rusage so that it does not rely on a global scheduler lock.

I am posting the patch here in case anyone with specific knowledge of turnstiles, sleepqueues, or signals would like to review it, and as a general heads-up to people interested in where the kernel is headed. It applies to current just prior to my kern_clock.c commits. I will re-merge and update again in the next few days, probably after we sort out rusage.

http://people.freebsd.org/~jeff/threadlock.diff

Thanks,
Jeff

_______________________________________________
freebsd-arch@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-arch
To unsubscribe, send any mail to "freebsd-arch-unsubscribe@freebsd.org"