From owner-freebsd-hackers Sat Apr  1 11:48:56 1995
Return-Path: hackers-owner
Received: (from majordom@localhost) by freefall.cdrom.com (8.6.10/8.6.6) id LAA27067 for hackers-outgoing; Sat, 1 Apr 1995 11:48:56 -0800
Received: from vinkku.hut.fi (vode@vinkku.hut.fi [130.233.245.1]) by freefall.cdrom.com (8.6.10/8.6.6) with ESMTP id LAA27061 for ; Sat, 1 Apr 1995 11:48:54 -0800
Received: (from vode@localhost) by vinkku.hut.fi (8.6.11/8.6.7) id WAA16382; Sat, 1 Apr 1995 22:48:34 +0300
Date: Sat, 1 Apr 1995 22:48:34 +0300
From: Kai Vorma
Message-Id: <199504011948.WAA16382@vinkku.hut.fi>
To: nate@sneezy.sri.com (Nate Williams)
cc: hackers@FreeBSD.org
Subject: Re: Two proposals
In-Reply-To: <199504011746.KAA19377@trout.sri.MT.net>
References: <199504011554.SAA10589@vinkku.hut.fi> <199504011746.KAA19377@trout.sri.MT.net>
Reply-To: Kai.Vorma@hut.fi
Sender: hackers-owner@FreeBSD.org
Precedence: bulk

Nate Williams writes:
> > o CSRI malloc V1.17 (alpha?)
> Don't know about this one

It is the Mark Moraes malloc.  You can find version 1.17 from
ftp.cs.toronto.edu:/pub/moraes.  I called it CSRI malloc because there
wasn't any name for it (except "Yet another malloc()" :-)

> > o Malloc-2.5.3b by Doug Lea
>
> Don't know about this one either.  We also need to consider

This one resides in gee.cs.oswego.edu:/pub/misc.  It is very fast and
space-efficient, but as I said it broke sed (and destroyed my cnews
installation :-(

> Is that the CSRI malloc?  I wasn't aware of that.  As far as slow goes,
> benchmarks Righ Murphy did a while ago put it *much* faster than our
> current malloc and not much slower than GNU malloc.  (And I think once

Actually it was slow in just one benchmark (see below) - I haven't
tested it much.

> it was faster than GNU malloc).  I'd prefer something that was more
> space efficient that wasn't *really* slow vs. something that was a bit
> faster but wasn't quite as efficient.

Yup.  CPUs are fast today but memory is expensive..
> If you could take the time to test the system using it, I would be
> grateful.  Also, if the core folks don't mind I'd like to bloat the tree
> and bring libmalloc back into the tree so that we could replace the
> stock malloc with it.  Bringing it into the tree makes it easier for
> folks to test it out.

I'll try that Moraes malloc soon.  Here are some test results using
the test programs from Moraes' malloc distribution.

..vode

---------------------------------------------------------------------------
tests/t1.c

FreeBSD Malloc

break is initially 0x221c
break is 0x17ffc (89568 bytes sbrked) after 1000 allocations of 50
break is 0x17ffc (89568 bytes sbrked) after freeing all allocations
break is 0x20ffc (126432 bytes sbrked) after allocating 25000
break is initially 0x221c
break is 0x1fbffc (2072032 bytes sbrked) after 1000 allocations of 1024
break is 0x1fbffc (2072032 bytes sbrked) after freeing all allocations
break is 0x27cffc (2600416 bytes sbrked) after allocating 512000

GnuMalloc

break is initially 0x221c
break is 0x1a000 (97764 bytes sbrked) after 1000 allocations of 50
break is 0xa000 (32228 bytes sbrked) after freeing all allocations
break is 0x11000 (60900 bytes sbrked) after allocating 25000
break is initially 0x221c
break is 0x104000 (1056228 bytes sbrked) after 1000 allocations of 1024
break is 0xa000 (32228 bytes sbrked) after freeing all allocations
break is 0x87000 (544228 bytes sbrked) after allocating 512000

CSRI (Moraes) malloc

break is initially 0x8354
break is 0x1c384 (81968 bytes sbrked) after 1000 allocations of 50
break is 0x1c384 (81968 bytes sbrked) after freeing all allocations
break is 0x1c384 (81968 bytes sbrked) after allocating 25000
break is initially 0x8354
break is 0x10964b (1053431 bytes sbrked) after 1000 allocations of 1024
break is 0x10964b (1053431 bytes sbrked) after freeing all allocations
break is 0x10964b (1053431 bytes sbrked) after allocating 512000

Malloc-2.5.3b

break is initially 0x3e44
break is 0x17e44 (81920 bytes sbrked) after 1000 allocations of 50
break is 0x17e44 (81920 bytes sbrked) after freeing all allocations
break is 0x17e44 (81920 bytes sbrked) after allocating 25000
break is initially 0x3e44
break is 0x103e44 (1048576 bytes sbrked) after 1000 allocations of 1024
break is 0x103e44 (1048576 bytes sbrked) after freeing all allocations
break is 0x103e44 (1048576 bytes sbrked) after allocating 512000

As we can see, the FreeBSD malloc cannot even reuse the returned
memory for the last big malloc; it must sbrk() more memory from the
system.  GNU malloc can return most of the memory to the system, but I
wouldn't expect that in "normal" use because the sbrk interface is so
inflexible.

Here is another test (simumalloc.c):

/*
 * To measure the speed of malloc - based on the algorithm described in
 * "In Search of a Better Malloc" by David G. Korn and Kiem-Phong Vo,
 * Usenix 1985.  This is a vicious test of memory allocation, but does
 * suffer from the problem that it asks for a uniform distribution of
 * sizes - a more accurate distribution is a multi-normal distribution
 * for all applications I've seen.
 */

FreeBSD malloc

+ ./simumalloc -d -t 2000 -s 1024 -l 2000
Sbrked 4924348, MaxAlloced 2104624, Wastage 0.57
+ ./simumalloc -t 15000 -s 1024 -l 2000
Sbrked 5276604, MaxAlloced 2168296, Wastage 0.59
+ ./simumalloc -d -t 5000 -s 512 -l 20
Sbrked 70588, MaxAlloced 21612, Wastage 0.69
+ ./simumalloc -T trace -t 5000 -s 512 -l 20
./simumalloc: -T option needs CSRI malloc
+ ./simumalloc -d -t 500 -s 512 -l 20
Sbrked 66492, MaxAlloced 20220, Wastage 0.70
+ ./simumalloc -d -t 500 -s 512 -l 500
Sbrked 402364, MaxAlloced 268332, Wastage 0.33
+ ./simumalloc -d -t 500 -s 512 -a
Sbrked 742332, MaxAlloced 525660, Wastage 0.29
2.137s real  0.477s user  1.109s system  74% ./regress

GnuMalloc

+ ./simumalloc -d -t 2000 -s 1024 -l 2000
Sbrked 2831296, MaxAlloced 2104624, Wastage 0.26
+ ./simumalloc -t 15000 -s 1024 -l 2000
Sbrked 2929600, MaxAlloced 2168296, Wastage 0.26
+ ./simumalloc -d -t 5000 -s 512 -l 20
Sbrked 54208, MaxAlloced 21612, Wastage 0.60
+ ./simumalloc -T trace -t 5000 -s 512 -l 20
./simumalloc: -T option needs CSRI malloc
+ ./simumalloc -d -t 500 -s 512 -l 20
Sbrked 50112, MaxAlloced 20220, Wastage 0.60
+ ./simumalloc -d -t 500 -s 512 -l 500
Sbrked 398272, MaxAlloced 268332, Wastage 0.33
+ ./simumalloc -d -t 500 -s 512 -a
Sbrked 734144, MaxAlloced 525660, Wastage 0.28
2.192s real  0.756s user  0.873s system  74% ./regress

CSRI-malloc

+ ./simumalloc -d -t 2000 -s 1024 -l 2000
Sbrked 2324079, MaxAlloced 2104624, Wastage 0.09
+ ./simumalloc -t 15000 -s 1024 -l 2000
Sbrked 2504426, MaxAlloced 2168296, Wastage 0.13
+ ./simumalloc -d -t 5000 -s 512 -l 20
Sbrked 28693, MaxAlloced 21612, Wastage 0.25
+ ./simumalloc -T trace -t 5000 -s 512 -l 20
./simumalloc: -T option needs CSRI malloc
+ ./simumalloc -d -t 500 -s 512 -l 20
Sbrked 24594, MaxAlloced 20220, Wastage 0.18
+ ./simumalloc -d -t 500 -s 512 -l 500
Sbrked 307425, MaxAlloced 268332, Wastage 0.13
+ ./simumalloc -d -t 500 -s 512 -a
Sbrked 532870, MaxAlloced 525660, Wastage 0.01
3.662s real  2.299s user  0.784s system  84% ./regress

malloc-2.5.3b

+ ./simumalloc -d -t 2000 -s 1024 -l 2000
Sbrked 2244608, MaxAlloced 2104624, Wastage 0.06
+ ./simumalloc -t 15000 -s 1024 -l 2000
Sbrked 2392064, MaxAlloced 2168296, Wastage 0.09
+ ./simumalloc -d -t 5000 -s 512 -l 20
Sbrked 32768, MaxAlloced 21612, Wastage 0.34
+ ./simumalloc -T trace -t 5000 -s 512 -l 20
./simumalloc: -T option needs CSRI malloc
+ ./simumalloc -d -t 500 -s 512 -l 20
Sbrked 24576, MaxAlloced 20220, Wastage 0.18
+ ./simumalloc -d -t 500 -s 512 -l 500
Sbrked 303104, MaxAlloced 268332, Wastage 0.11
+ ./simumalloc -d -t 500 -s 512 -a
Sbrked 532480, MaxAlloced 525660, Wastage 0.01
1.570s real  0.647s user  0.754s system  89% ./regress