From: Scott Long <scottl@freebsd.org>
Date: Tue, 21 Dec 2004 17:17:32 -0700
To: JINMEI Tatuya
Cc: current@freebsd.org
Subject: Re: BIND9 performance issues with SMP

JINMEI Tatuya / 神明達哉 wrote:
> Hello,
>
> I was recently playing with FreeBSD 5.3's SMP kernel and BIND9 to measure response performance using multiple threads. Perhaps this is already well-known, but the results showed that using threads on FreeBSD 5.3 didn't improve performance (rather, it actually degraded performance as we increased threads/CPUs).
>
> In short, it doesn't make sense to enable threading on FreeBSD in any case (even with multiple CPUs).
>
> I'm going to describe what I found in detail. I hope some of the following contains new information that can help improve FreeBSD's SMP support in general.
>
> - tested environments
>   OS: FreeBSD 5.3 beta 7 and RC1 (I believe the results should be the same with 5.3-RELEASE)
>   Machine: Xeon 700MHz x 4 / Xeon 3000MHz x 4
>   BIND version: 9.3.0, built with --enable-threads
>
> - measurement description
>   named loaded the root zone file (as of around May 2003). I measured response performance as queries per second (qps) with the queryperf program (which comes with the BIND9 distribution). queryperf asked for various randomly generated host names (some of which result in NXDOMAIN). All the numbers below show the resulting qps for a 30-second test.
>
> - some general observations from the results
>
> 1. BIND 9.3.0 does not create worker threads with the PTHREAD_SCOPE_SYSTEM attribute. For FreeBSD, this means different worker threads cannot run on different CPUs, so multi-threading doesn't help performance at all.

This isn't really true.  All it means is that each thread will use whatever scheduler activation is available to the UTS at the time instead of having its own dedicated scheduler activation.
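(For readers unfamiliar with the attribute under discussion: thread scope is selected per thread at creation time through a pthread attribute. The fragment below is purely illustrative and is not BIND's actual code; error handling is omitted.)

#include <pthread.h>

static void *
worker(void *arg)
{
    /* per-thread query-processing loop would go here */
    return (NULL);
}

int
main(void)
{
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);

    /*
     * PTHREAD_SCOPE_SYSTEM: the thread competes for CPU system-wide,
     * i.e. it gets its own kernel scheduling entity.  The default on
     * 5.x libpthread is PTHREAD_SCOPE_PROCESS, where the userland
     * scheduler (UTS) multiplexes threads onto whatever scheduler
     * activations it has available.
     */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    pthread_create(&tid, &attr, worker, NULL);
    pthread_attr_destroy(&attr);
    pthread_join(tid, NULL);
    return (0);
}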
The whole theory behind SA/KSE is that scheduling threads from userland should be cheaper than from the kernel, and SA provides the benefit of making more than one scheduling resource available to the userland scheduler, unlike libc_r.  In practice, it looks like something broke between 5.2 and 5.3 such that process-scope threads behave very strangely, almost as if the UTS thinks it only has one scheduling resource to work with.  The result is that performance degrades to that of libc_r, except that threads that block in the kernel don't block the whole process.  I keep hoping that Dan or Julian or David will have time to look at this, but that hasn't come to pass yet, unfortunately.  I'd consider it a very high-priority bug, though.

> 2. generally, BIND9 requires lots of mutex locks to process a single DNS query, causing many lock contentions. The contention degrades response performance very much. This is true to some extent on any OS, but lock contention seems particularly heavy on FreeBSD (see also item 4 below).
>
> 3. the SMP support in the kernel generally performs well in terms of UDP input/output on a single socket. However, the kernel uses a giant lock for the socket send buffer in the sosend() function, which can be a significant performance bottleneck with high-performance CPUs (the bottleneck was not revealed with 700MHz processors, but did appear with 3000MHz CPUs). It seems to me we can safely avoid the bottleneck for DNS servers, since UDP output does not use the socket send buffer. I've made a quick-hack patch to the FreeBSD kernel and confirmed that this is the case. (For those who are particularly interested in this patch, it's available at http://www.jinmei.org/freebsd5.3-sosend-patch . A new socket option, SO_FAST1, on a UDP socket enables the optimization.)

Very interesting!

> 4. mutex contention is VERY expensive (it looks much, much more expensive than on other OSes), while acquiring a lock without contention is reasonably cheap. (Almost) whenever a user thread blocks due to lock contention, it is suspended with a system call (kse_release), probably causing a context switch. (I'm not really sure whether the system call overhead is the main reason for the performance penalty, though.)

This might be related to what I said above.  Were you observing this with process scope or system scope threads?  Again, if scheduling decisions are not cheap in the UTS then there really is no point to SA/KSE.

> 5. some standard library functions internally call pthread_mutex_lock(), which can also make the server slow due to the expensive contention tax. Regarding BIND9, malloc() and arc4random() can be a heavy bottleneck (the latter is called for every query if we use the "random" order for RRsets).
>
> 6. at least so far, the ULE scheduler doesn't help improve the performance (it even performs worse than the normal 4BSD scheduler).

With both types of threads?  Have you tried Jeff's recent fixes to ULE?  Unfortunately we saw similar performance problems over the summer, and that contributed to switching away from the ULE scheduler.  Hopefully this situation improves.

> - experiments with possible optimizations
>
> Based on the above results, I've explored some optimizations to improve the performance. The first-level optimization is to create worker threads with PTHREAD_SCOPE_SYSTEM and to avoid using malloc(3) in the main part of query processing. Let's call this version "BIND+".
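(As an aside on the malloc(3) avoidance just mentioned: one common way to keep malloc, and the mutex inside it, out of the per-query path is to give each worker thread a preallocated scratch arena that is reset after every query. The sketch below only illustrates that general idea; the names query_arena and arena_alloc are invented here, and this is not what BIND+ actually does.)

#include <pthread.h>
#include <stddef.h>
#include <stdlib.h>

#define ARENA_SIZE  (256 * 1024)        /* per-thread scratch space */

struct query_arena {
    char    buf[ARENA_SIZE];
    size_t  off;                        /* bump-pointer offset */
};

static pthread_key_t arena_key;

/* Once at program startup. */
static void
arena_init(void)
{
    pthread_key_create(&arena_key, free);
}

/* Once per worker thread, outside the hot path. */
static void
arena_init_thread(void)
{
    struct query_arena *a = malloc(sizeof(*a));

    a->off = 0;
    pthread_setspecific(arena_key, a);
}

/*
 * Lock-free allocation in the query path: just bump a thread-local offset.
 * (Alignment is ignored here for brevity.)
 */
static void *
arena_alloc(size_t len)
{
    struct query_arena *a = pthread_getspecific(arena_key);

    if (a->off + len > ARENA_SIZE)
        return (NULL);                  /* caller would fall back to malloc() */
    void *p = a->buf + a->off;
    a->off += len;
    return (p);
}

/* Reset at the end of each query instead of calling free(). */
static void
arena_reset(void)
{
    struct query_arena *a = pthread_getspecific(arena_key);

    a->off = 0;
}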
> I also tried eliminating any possible mutex contention in the main part of query processing (it depends on some unrealistic assumptions, so we cannot use this code in actual operation). This optimization is called "BIND++". BIND++ also contains the optimizations of BIND+. Additionally, I've made a quick patch to the kernel source code so that sosend() does not lock the socket send buffer for some particular UDP packets.
>
> The following are the test results with these optimizations:
>
> A. tests with FreeBSD 5.3 beta 7 on Xeon 700MHz x 4
>
>    threads    BIND    BIND+    BIND++
>       0       4818
>       1       3021    3390      4474
>       2       1859    2496      7781
>       3        986    1450     10615
>       4        774    1167     12668
>
> Note: "BIND" is pure BIND 9.3.0. "0 threads" means the result without threading. Numbers in the table body show the resulting qps.
>
> While BIND+ ran much better than pure 9.3.0, it still performed quite poorly. However, we can achieve the real benefit of multiple threads/CPUs with BIND++. This result shows that if we can control mutex contention in BIND9 in some realistic way, BIND can run faster on multiple CPUs with FreeBSD.
>
> B. tests with FreeBSD 5.3 RC1 on Xeon 3000MHz x 4
>
>    threads    BIND    BIND++    BIND++kernel_patch
>       0      16253
>       1       7953    14600     14438
>       2       3591    19840     23854
>       3       1012    24268     30268
>       4        533    25447     30434
>
> Note: "BIND++kernel_patch" means BIND++ with the kernel optimization I mentioned in item 3 above.
>
> The results show that even full optimization on the application side is not enough with high-speed CPUs; further kernel optimization can help in this area. The performance still saturated at around 4 CPUs, and I could not figure out the reason at that time.
>
> C. (for comparison) SuSE Linux (kernel 2.6.4, glibc 2.3.3) on the same box I used for experiment B
>
>    threads    BIND    BIND++
>       0      16117
>       1      13707    17835
>       2      16493    26946
>       3      16478    32688
>       4      14517    36090
>
> While "pure BIND9" does not provide better performance with multiple CPUs either (and the optimizations in BIND++ are equally effective), the penalty with multiple threads is much smaller. I guess this is because Linux handles lock contention much better than FreeBSD.

Do you have any comparisons to NetBSD or Solaris?  Comparing to Linux often results in comparing apples to oranges, since there is a long-standing suspicion that Linux cuts corners where BSD does not.  Also, would you be able to re-run your tests using the THR thread package?

Scott
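(For completeness, this is roughly how an application would opt in to the experimental sosend() optimization from item 3. It is only a sketch: SO_FAST1 exists only with the poster's freebsd5.3-sosend patch applied, the message does not say at which level or with which value the option is defined, so both are placeholders here, and stock FreeBSD will simply reject the setsockopt() call.)

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>

#ifndef SO_FAST1
#define SO_FAST1 0x8000                 /* placeholder value; defined by the patch */
#endif

int
open_fast_udp_socket(void)
{
    int s, on = 1;

    if ((s = socket(AF_INET, SOCK_DGRAM, 0)) == -1)
        return (-1);

    /*
     * Ask the patched kernel to skip the send-buffer lock in sosend()
     * for this UDP socket.  Assumed here to be a SOL_SOCKET-level
     * boolean option; on an unpatched kernel this fails with
     * ENOPROTOOPT and can be ignored.
     */
    if (setsockopt(s, SOL_SOCKET, SO_FAST1, &on, sizeof(on)) == -1)
        perror("setsockopt(SO_FAST1)");

    return (s);
}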