From owner-freebsd-hackers@FreeBSD.ORG Thu Jun 30 17:42:11 2005
Date: Thu, 30 Jun 2005 10:42:08 -0700
From: John-Mark Gurney
To: ant
Cc: hackers@freebsd.org
Subject: Re: hot path optimizations in uma_zalloc() & uma_zfree()
Message-ID: <20050630174208.GL727@funkthat.com>
In-Reply-To: <000d01c57cf7$b9b6f9f0$29931bd9@ertpc>
User-Agent: Mutt/1.4.2.1i
List-Id: Technical Discussions relating to FreeBSD

ant wrote this message on Thu, Jun
30, 2005 at 01:08 +0300:
> I just tried to make bucket management in the per-CPU cache work like
> Solaris (see Jeff Bonwick's paper, "Magazines and Vmem")
> and got a performance gain of around 10% in my test program.
> Then I made another minor code optimization and got another 10%.
> The program just creates and destroys sockets in a loop.
>
> I suppose the reason for the first gain lies in an increased CPU cache
> hit rate. In the current FreeBSD code, allocations and frees deal with
> separate buckets. The buckets are swapped when one of them becomes
> full or empty first. In Solaris this work is pure LIFO:
> i.e., alloc() and free() work with one bucket, the current bucket
> (called a magazine there), which is why the cache hit rate is higher.

If you do as the paper does, and use the buckets for allocating buckets,
I would recommend you drop the free bucket list from the pool...  If
bucket allocations are as cheap as they are supposed to be, there is no
need to keep a local list of empty buckets.. :)  Just following the
principle stated in the paper of letting well-optimized parts do their
part...

P.S. I have most of a userland implementation of this done.  Since
someone else has done the kernel, I'll solely target userland for the
code now.

-- 
John-Mark Gurney				Voice: +1 415 225 5579
"All that I will do, has been done, All that I have, has not."