From: Kris Kennaway <kris@FreeBSD.org>
Date: Mon, 03 Mar 2008 23:19:08 +0100
To: Bakul Shah
Cc: gnn@freebsd.org, current@freebsd.org
Subject: Re: Differences in malloc between 6 and 7?

Bakul Shah wrote:
> On Mon, 03 Mar 2008 15:23:33 EST gnn@freebsd.org wrote:
>> One of the folks I'm working with found this. The following code,
>> which, yes, is just an example, runs half as fast on 7.0-RELEASE as
>> on 6.3. Where should I look to find out why?
>
> Specifying malloc option K (double the virtual memory chunk size)
> roughly halves the runtime, and each additional K reduces it further.
> Since this test spends most of its time in the kernel, maybe this is
> just mmap overhead? Or maybe the defaults on 6.3 were different.

Well, the whole architecture of malloc is different (7.0 replaced
phkmalloc with jemalloc). I also see a big performance cliff (a factor
of 10 drop) when the malloc request size exceeds the chunk size (1 MB
by default). Concurrent access to mmapped memory also performs badly
in FreeBSD right now; I have patches that convert the vm_map lock to
an sx lock to avoid this contention, but they need some more work.

Kris
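
[The test program gnn@ refers to was not quoted in this reply. A
minimal sketch of the kind of loop under discussion might look like
the following; it is illustrative only, not the original code. The
_malloc_options knob is the documented malloc(3) tuning interface,
and the 1.5 MB request size is an assumption chosen to sit just above
the default 1 MB chunk size, so each allocation should fall off the
cliff Kris describes and be serviced by its own mmap/munmap pair.]

/*
 * Illustrative allocation loop (not the original test program).
 * Each request is larger than the default 1 MB chunk, so on 7.0's
 * allocator every malloc/free round-trips through the kernel.
 * Adding "K" doubles the chunk size (1 MB -> 2 MB), which moves the
 * 1.5 MB requests back under the threshold; the same effect can be
 * had at run time with MALLOC_OPTIONS=K in the environment.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* malloc(3) tuning knob; "K" doubles the virtual memory chunk size. */
const char *_malloc_options = "K";

int
main(int argc, char *argv[])
{
	/* 1.5 MB by default: above 1 MB chunks, below doubled 2 MB ones. */
	size_t size = (argc > 1) ? (size_t)atol(argv[1]) : 1536 * 1024;
	int i;

	for (i = 0; i < 10000; i++) {
		char *p = malloc(size);
		if (p == NULL)
			abort();
		memset(p, 0, size);	/* touch the pages */
		free(p);
	}
	printf("done: %d allocations of %zu bytes\n", i, size);
	return (0);
}

[Timing this under time(1) with and without the "K", or with the size
argument varied around 1 MB, should show the cliff: requests above the
chunk size spend most of their time in system time rather than user
time, consistent with Bakul's observation that the test is
kernel-bound.]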