From owner-freebsd-current Sun Oct  8 12:24:48 1995
Return-Path: owner-current
Received: (from root@localhost) by freefall.freebsd.org (8.6.12/8.6.6) id MAA09495 for current-outgoing; Sun, 8 Oct 1995 12:24:48 -0700
Received: from localhost.cdrom.com (localhost.cdrom.com [127.0.0.1]) by freefall.freebsd.org (8.6.12/8.6.6) with SMTP id MAA09488 for ; Sun, 8 Oct 1995 12:24:45 -0700
X-Authentication-Warning: freefall.freebsd.org: Host localhost.cdrom.com didn't use HELO protocol
To: current
Subject: phkmalloc/2
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <9484.813180284.1@freefall.freebsd.org>
Date: Sun, 08 Oct 1995 12:24:45 -0700
Message-ID: <9485.813180285@freefall.freebsd.org>
From: Poul-Henning Kamp
Sender: owner-current@FreeBSD.org
Precedence: bulk

I have just committed phkmalloc/2 to -current.  You will find it in
src/lib/libc/stdlib/malloc.[c3]

Before you run any benchmarks on it, be sure to turn "EXTRA_SANITY" off!
EXTRA_SANITY will be enabled for the next couple of weeks, to catch any
problems, but it slows down malloc quite a bit.

EXTRA_SANITY also sets the "junk" option to on.  This means that all
memory returned by malloc will contain 0xd0, and that most programs
which rely on it being zero'ed will core-dump sooner or later.  (You
can change the 0xd0, and if you find a good setting that produces
easier-to-understand coredumps, please tell me.  Unfortunately the most
common coredump caused by 0xd0 seems to have trashed the stack
badly :-( )

If you find a program which coredumps, before you complain that my
malloc is buggy, try these:

	setenv MALLOC_OPTIONS j
	setenv MALLOC_OPTIONS Z

If either of these prevents the core-dump, the program bogusly relies
on malloc to return zero'ed storage, and you need to fix the program.
(A minimal sketch of this kind of bug is at the end of this message.)

Here are some benchmarks:

# In-core test, smaller, faster.
./malloc 50000000 2000 8192
159.2u 1.5s 2:41.85 99.3% 5+7742k 0+0io 0pf+0w
./gnumalloc 50000000 2000 8192
272.6u 0.4s 4:35.01 99.3% 5+8533k 0+0io 0pf+0w

# More-than-core test, smaller, a LOT faster.
./malloc 500000 14000 8192
6.5u 4.1s 4:08.87 4.3% 5+49209k 0+0io 9772pf+0w
./gnumalloc 500000 14000 8192
16.2u 14.5s 15:36.14 3.2% 5+54100k 0+0io 47651pf+0w

# Small-requests test, slightly slower and bigger.
./malloc 20000000 20000 2048
67.0u 0.3s 1:07.83 99.2% 5+18199k 0+0io 4pf+0w
./gnumalloc 20000000 20000 2048
66.2u 0.3s 1:07.03 99.3% 5+18107k 0+0io 0pf+0w

I'm very interested in numbers, observations and feedback.

Enjoy!

Poul-Henning
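
To make the failure mode concrete, here is a minimal, hypothetical
example (an illustration only, not code from the tree) of a program
that bogusly relies on malloc returning zero'ed storage, together
with the calloc-based fix:

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	int
	main(void)
	{
		char *buf;

		/*
		 * WRONG: malloc does not promise zero'ed memory.  With
		 * the "junk" option on, buf is filled with 0xd0, strlen
		 * runs off the end of the buffer, and the program
		 * core-dumps sooner or later.
		 */
		buf = malloc(64);
		if (buf == NULL)
			return (1);
		printf("bogus length: %lu\n", (unsigned long)strlen(buf));
		free(buf);

		/* RIGHT: ask for zero'ed storage explicitly. */
		buf = calloc(1, 64);
		if (buf == NULL)
			return (1);
		printf("real length: %lu\n", (unsigned long)strlen(buf));
		free(buf);

		return (0);
	}

The calloc version behaves the same no matter what MALLOC_OPTIONS is
set to; the malloc version only appears to work when the returned
memory happens to be zero.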