Date: Sun, 20 Sep 1998 22:34:38 -0700 (PDT)
From: Matthew Dillon <dillon@backplane.com>
To: Mike Smith <mike@smith.net.au>
Cc: Doug Rabson <dfr@FreeBSD.ORG>, cvs-committers@FreeBSD.ORG, cvs-all@FreeBSD.ORG
Subject: Re: cvs commit: src/lib/libstand ufs.c
Message-ID: <199809210534.WAA00772@apollo.backplane.com>
References: <199809210246.TAA02481@word.smith.net.au>
:If anyone has a generic malloc implementation that optimises for minimum
:heap usage, robustness and small code footprint that we can use, I'd
:love to see it.  The Mach-derived allocator we're currently using is
:snot, and I'd dearly love to kill it.
:
:The allocation profile of the bootstrap is moderately messy; there's a
:lot of small object allocation/freeing (mostly strings containing
:pathnames, etc.) and some large objects (eg. UFS blocks, ether
:datagrams, etc.) of disparate sizes.

I have written several.  I've put the current incarnation of my
allocator up on the web:

    http://www.backplane.com/FreeSrc/

I ripped it out of one of my embedded projects.  It's a very nice and
*simple* allocator.  It uses a pool/free-list approach, so allocations
have no memory overhead... you can allocate the entire pool as usable
memory.  You must specify the number of bytes previously allocated when
you free an object.  It has other features, too, including the ability
to guarantee alignment (i.e. if all your low-level allocations are
power-of-2 sized).  The allocation granularity is 8 bytes on a 32-bit
system, 16 bytes on a 64-bit system.

This code uses a simple linked list of free blocks, so fragmentation
can be a problem if you are allocating lots of differently-sized
objects from the pool.  If you create too many holes, the zfree()
function slows down a bit.

The pool is designed to be the low-level allocator used by a
higher-level allocator, but it can easily be used as the only allocator
in the system.  Typically, a project uses several memory pools,
recursively embedded in each other and/or separate from each other.

Due to the overheadless nature of the allocator, you can do cool
things.  For example, you can initialize two (or more) memory pools
that cover the *same* address range (use the same backing store).
Pools start out with the entire buffer marked as allocated (no free
memory available).  You start by deallocating the entire buffer back
into the first pool, making it available for allocation in the first
pool.  You then allocate pages out of the first pool and deallocate
them into the second pool, making a portion of the first pool
available in the second pool.  The first pool is used to allocate
page-sized chunks while the second pool is used to allocate smaller
chunks.  Etc...

					-Matt

    Matthew Dillon  Engineering, HiWay Technologies, Inc. & BEST Internet
                    Communications & God knows what else.
                    <dillon@backplane.com>
                    (Please include original email in any response)
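
A minimal sketch of the kind of pool/free-list allocator the message
describes might look like the following.  This is not the code at the URL
above: zfree() is the only name taken from the message, while zinitpool(),
znalloc(), the fixed 16-byte granularity, and the first-fit policy are
assumptions made purely for illustration.

    /*
     * Sketch of a pool allocator with no per-allocation header: free runs
     * are threaded through the pool memory itself, and the caller passes
     * the original size back in on free.
     */
    #include <stddef.h>

    #define ZALLOC_ALIGN	16	/* granularity; the email cites 8 (32-bit) / 16 (64-bit) */

    typedef struct MemNode {		/* lives inside the free space itself */
    	size_t		 bytes;		/* size of this free run */
    	struct MemNode	*next;		/* next free run, address-ordered */
    } MemNode;

    typedef struct MemPool {
    	MemNode		*freelist;
    } MemPool;

    static size_t
    zroundup(size_t bytes)
    {
    	return ((bytes + ZALLOC_ALIGN - 1) & ~(size_t)(ZALLOC_ALIGN - 1));
    }

    /* A pool starts out fully "allocated": no free memory until you zfree() into it. */
    void
    zinitpool(MemPool *mp)
    {
    	mp->freelist = NULL;
    }

    /* First-fit allocation out of the free list; returns NULL if nothing fits. */
    void *
    znalloc(MemPool *mp, size_t bytes)
    {
    	MemNode **pmn, *mn;

    	bytes = zroundup(bytes ? bytes : 1);
    	for (pmn = &mp->freelist; (mn = *pmn) != NULL; pmn = &mn->next) {
    		if (mn->bytes < bytes)
    			continue;
    		if (mn->bytes == bytes) {
    			*pmn = mn->next;	/* exact fit: unlink the run */
    		} else {
    			MemNode *rem = (MemNode *)((char *)mn + bytes);
    			rem->bytes = mn->bytes - bytes;	/* carve off the front */
    			rem->next = mn->next;
    			*pmn = rem;
    		}
    		return (mn);
    	}
    	return (NULL);
    }

    /*
     * The caller must pass the same size it allocated; the block is merged
     * into the address-ordered free list and coalesced with its neighbors.
     */
    void
    zfree(MemPool *mp, void *ptr, size_t bytes)
    {
    	MemNode **pmn, *mn, *blk = (MemNode *)ptr;

    	bytes = zroundup(bytes ? bytes : 1);
    	blk->bytes = bytes;

    	/* find the insertion point, keeping the list sorted by address */
    	for (pmn = &mp->freelist;
    	    (mn = *pmn) != NULL && (char *)mn < (char *)blk;
    	    pmn = &mn->next)
    		;
    	blk->next = mn;
    	*pmn = blk;

    	/* coalesce with the following run */
    	if (mn != NULL && (char *)blk + blk->bytes == (char *)mn) {
    		blk->bytes += mn->bytes;
    		blk->next = mn->next;
    	}
    	/* coalesce with the preceding run */
    	if (pmn != &mp->freelist) {
    		MemNode *prev = (MemNode *)((char *)pmn - offsetof(MemNode, next));
    		if ((char *)prev + prev->bytes == (char *)blk) {
    			prev->bytes += blk->bytes;
    			prev->next = blk->next;
    		}
    	}
    }

Because the free-list nodes live inside the freed memory itself, an
allocation carries no header, which is why the caller has to hand the size
back to zfree() and why the whole pool is usable as allocatable memory.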
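
The overlapping-pools trick from the last paragraph, expressed against that
hypothetical sketch, could read as follows (PAGE_SIZE, the buffer size, and
the function names are again illustrative, not taken from the real code):

    #define PAGE_SIZE	4096

    static _Alignas(16) char backing[64 * 1024];	/* shared backing store */
    static MemPool pool1, pool2;

    void
    setup(void)
    {
    	void *page;

    	zinitpool(&pool1);		/* both pools start fully "allocated" */
    	zinitpool(&pool2);

    	/* make the whole buffer available in pool1 */
    	zfree(&pool1, backing, sizeof(backing));

    	/* move one page from pool1 into pool2 */
    	page = znalloc(&pool1, PAGE_SIZE);
    	zfree(&pool2, page, PAGE_SIZE);

    	/* page-sized chunks now come from pool1, small objects from pool2 */
    	/* char *name = znalloc(&pool2, 64); */
    }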