From owner-p4-projects@FreeBSD.ORG  Sun Feb 10 08:06:23 2008
Date: Sun, 10 Feb 2008 08:06:21 GMT
Message-Id: <200802100806.m1A86L3w069493@repoman.freebsd.org>
From: Sepherosa Ziehau
To: Perforce Change Reviews
Subject: PERFORCE change 135137 for review

http://perforce.freebsd.org/chv.cgi?CH=135137

Change 135137 by sephe@sephe_enigma:sam_wifi on 2008/02/10 08:06:11

	IFC

Affected files ...

.. //depot/projects/wifi/ObsoleteFiles.inc#26 integrate
.. //depot/projects/wifi/UPDATING#38 integrate
.. //depot/projects/wifi/bin/date/date.c#3 integrate
.. //depot/projects/wifi/contrib/openpam/FREEBSD-vendor#1 branch
.. //depot/projects/wifi/crypto/openssh/FREEBSD-Xlist#2 integrate
.. //depot/projects/wifi/crypto/openssh/FREEBSD-upgrade#6 integrate
.. //depot/projects/wifi/crypto/openssh/FREEBSD-vendor#1 branch
.. //depot/projects/wifi/etc/namedb/named.root#3 integrate
.. //depot/projects/wifi/include/pthread_np.h#5 integrate
.. //depot/projects/wifi/lib/libc/include/namespace.h#4 integrate
.. //depot/projects/wifi/lib/libc/include/un-namespace.h#4 integrate
.. //depot/projects/wifi/lib/libc/stdlib/malloc.c#12 integrate
.. //depot/projects/wifi/lib/libfetch/common.c#4 integrate
.. //depot/projects/wifi/lib/libfetch/fetch.3#7 integrate
.. //depot/projects/wifi/lib/libfetch/ftp.c#8 integrate
.. //depot/projects/wifi/lib/libfetch/http.c#8 integrate
.. //depot/projects/wifi/lib/libkse/Makefile#2 integrate
.. //depot/projects/wifi/lib/libkse/kse.map#2 integrate
.. //depot/projects/wifi/lib/libkse/thread/thr_mutex.c#2 integrate
.. //depot/projects/wifi/lib/libthr/Makefile#17 integrate
.. //depot/projects/wifi/lib/libthr/pthread.map#12 integrate
.. //depot/projects/wifi/lib/libthr/thread/thr_mutex.c#11 integrate
.. //depot/projects/wifi/lib/msun/ld128/s_exp2l.c#2 integrate
.. //depot/projects/wifi/lib/msun/ld80/s_exp2l.c#2 integrate
.. //depot/projects/wifi/lib/msun/src/e_exp.c#3 integrate
.. //depot/projects/wifi/lib/msun/src/e_expf.c#7 integrate
.. //depot/projects/wifi/lib/msun/src/s_exp2.c#3 integrate
.. //depot/projects/wifi/lib/msun/src/s_exp2f.c#3 integrate
.. //depot/projects/wifi/lib/msun/src/s_expm1.c#2 integrate
.. //depot/projects/wifi/lib/msun/src/s_expm1f.c#2 integrate
.. //depot/projects/wifi/lib/msun/src/s_logb.c#4 integrate
.. //depot/projects/wifi/lib/msun/src/s_truncl.c#3 integrate
.. //depot/projects/wifi/sbin/ipfw/ipfw.8#25 integrate
.. //depot/projects/wifi/sbin/md5/md5.c#4 integrate
.. //depot/projects/wifi/share/man/man4/ciss.4#5 integrate
.. //depot/projects/wifi/sys/boot/ofw/libofw/ofw_console.c#5 integrate
.. //depot/projects/wifi/sys/dev/ciss/ciss.c#23 integrate
.. //depot/projects/wifi/sys/fs/coda/cnode.h#3 integrate
.. //depot/projects/wifi/sys/fs/coda/coda_fbsd.c#2 integrate
.. //depot/projects/wifi/sys/fs/coda/coda_namecache.c#2 integrate
.. //depot/projects/wifi/sys/fs/coda/coda_psdev.c#3 integrate
.. //depot/projects/wifi/sys/fs/coda/coda_psdev.h#2 integrate
.. //depot/projects/wifi/sys/fs/coda/coda_subr.c#2 integrate
.. //depot/projects/wifi/sys/fs/coda/coda_venus.c#2 integrate
.. //depot/projects/wifi/sys/fs/coda/coda_vfsops.c#5 integrate
.. //depot/projects/wifi/sys/fs/coda/coda_vfsops.h#2 integrate
.. //depot/projects/wifi/sys/fs/coda/coda_vnops.c#5 integrate
.. //depot/projects/wifi/sys/fs/coda/coda_vnops.h#2 integrate
.. //depot/projects/wifi/sys/fs/nullfs/null_vfsops.c#17 integrate
.. //depot/projects/wifi/sys/kern/kern_descrip.c#32 integrate
.. //depot/projects/wifi/sys/kern/kern_lock.c#24 integrate
.. //depot/projects/wifi/sys/kern/kern_rwlock.c#14 integrate
.. //depot/projects/wifi/sys/kern/subr_sleepqueue.c#18 integrate
.. //depot/projects/wifi/sys/kern/subr_turnstile.c#15 integrate
.. //depot/projects/wifi/sys/kern/uipc_shm.c#3 integrate
.. //depot/projects/wifi/sys/kern/vfs_subr.c#47 integrate
.. //depot/projects/wifi/sys/netgraph/netflow/netflow.c#16 integrate
.. //depot/projects/wifi/sys/netgraph/ng_base.c#29 integrate
.. //depot/projects/wifi/sys/netgraph/ng_ppp.c#14 integrate
.. //depot/projects/wifi/sys/netgraph/ng_pppoe.c#13 integrate
.. //depot/projects/wifi/sys/netinet/in_rmx.c#8 integrate
.. //depot/projects/wifi/sys/netinet/ip_carp.c#20 integrate
.. //depot/projects/wifi/sys/netinet/ip_id.c#5 integrate
.. //depot/projects/wifi/sys/nfs4client/nfs4_vfsops.c#14 integrate
.. //depot/projects/wifi/sys/nfsclient/nfs_bio.c#23 integrate
.. //depot/projects/wifi/sys/nfsclient/nfs_subs.c#14 integrate
.. //depot/projects/wifi/sys/nfsclient/nfs_vfsops.c#22 integrate
.. //depot/projects/wifi/sys/nfsclient/nfsnode.h#13 integrate
.. //depot/projects/wifi/sys/sys/lockmgr.h#13 integrate
.. //depot/projects/wifi/sys/sys/param.h#36 integrate
.. //depot/projects/wifi/sys/sys/proc.h#39 integrate
.. //depot/projects/wifi/sys/sys/user.h#12 integrate
.. //depot/projects/wifi/tools/regression/netinet/ip_id_period/ip_id_period.py#1 branch
.. //depot/projects/wifi/tools/regression/pthread/mutex_islocked_np/Makefile#2 delete
.. //depot/projects/wifi/tools/regression/pthread/mutex_islocked_np/mutex_islocked_np.c#2 delete
.. //depot/projects/wifi/tools/regression/pthread/mutex_isowned_np/Makefile#1 branch
.. //depot/projects/wifi/tools/regression/pthread/mutex_isowned_np/mutex_isowned_np.c#1 branch
.. //depot/projects/wifi/usr.bin/gzip/znew#2 integrate
.. //depot/projects/wifi/usr.bin/ministat/ministat.c#2 integrate
.. //depot/projects/wifi/usr.bin/netstat/netstat.h#13 integrate
.. //depot/projects/wifi/usr.bin/netstat/route.c#8 integrate
.. //depot/projects/wifi/usr.bin/procstat/procstat.c#2 integrate
.. //depot/projects/wifi/usr.bin/procstat/procstat_basic.c#2 integrate
.. //depot/projects/wifi/usr.bin/procstat/procstat_files.c#3 integrate
.. //depot/projects/wifi/usr.bin/procstat/procstat_kstack.c#2 integrate
.. //depot/projects/wifi/usr.bin/procstat/procstat_threads.c#2 integrate
.. //depot/projects/wifi/usr.bin/procstat/procstat_vm.c#2 integrate
.. //depot/projects/wifi/usr.bin/sed/compile.c#5 integrate
.. //depot/projects/wifi/usr.bin/sed/defs.h#3 integrate
.. //depot/projects/wifi/usr.bin/sed/main.c#8 integrate
.. //depot/projects/wifi/usr.bin/sed/process.c#7 integrate
.. //depot/projects/wifi/usr.bin/uniq/uniq.c#3 integrate
.. //depot/projects/wifi/usr.sbin/bootparamd/Makefile#2 integrate
.. //depot/projects/wifi/usr.sbin/bootparamd/Makefile.inc#2 integrate
.. //depot/projects/wifi/usr.sbin/pkg_install/add/pkg_add.1#7 integrate

Differences ...

==== //depot/projects/wifi/ObsoleteFiles.inc#26 (text+ko) ====

@@ -1,5 +1,5 @@
 #
-# $FreeBSD: src/ObsoleteFiles.inc,v 1.127 2008/01/26 20:23:25 brueffer Exp $
+# $FreeBSD: src/ObsoleteFiles.inc,v 1.130 2008/02/06 19:45:25 delphij Exp $
 #
 # This file lists old files (OLD_FILES), libraries (OLD_LIBS) and
 # directories (OLD_DIRS) which should get removed at an update. Recently
@@ -3945,7 +3945,9 @@
 .if ${TARGET_ARCH} != "i386" && ${TARGET_ARCH} != "amd64"
 OLD_FILES+=usr/share/man/man8/boot_i386.8.gz
 .endif
+.if ${TARGET_ARCH} != "powerpc" && ${TARGET_ARCH} != "sparc64"
 OLD_FILES+=usr/share/man/man8/ofwdump.8.gz
+.endif
 OLD_FILES+=usr/share/man/man8/mount_reiserfs.8.gz
 OLD_FILES+=usr/share/man/man9/VFS_START.9.gz
 OLD_FILES+=usr/share/man/man9/cpu_critical_exit.9.gz

==== //depot/projects/wifi/UPDATING#38 (text+ko) ====

@@ -22,6 +22,10 @@
 	to maximize performance.  (To disable malloc debugging, run
 	ln -s aj /etc/malloc.conf.)
 
+20080208:
+	Belatedly note the addition of m_collapse for compacting
+	mbuf chains.
+
 20080126:
 	The fts(3) structures have been changed to use adequate
 	integer types for their members and so to be able to cope
@@ -969,4 +973,4 @@
 Contact Warner Losh if you have any questions about your use
 of this document.
 
-$FreeBSD: src/UPDATING,v 1.517 2008/01/26 17:09:39 yar Exp $
+$FreeBSD: src/UPDATING,v 1.518 2008/02/08 21:24:58 sam Exp $

==== //depot/projects/wifi/bin/date/date.c#3 (text+ko) ====

@@ -40,7 +40,7 @@
 #endif
 
 #include
-__FBSDID("$FreeBSD: src/bin/date/date.c,v 1.47 2005/01/10 08:39:21 imp Exp $");
+__FBSDID("$FreeBSD: src/bin/date/date.c,v 1.48 2008/02/07 16:04:24 ru Exp $");
 
 #include
 #include
@@ -186,8 +186,10 @@
 	const char *dot, *t;
 	int century;
 
+	lt = localtime(&tval);
+	lt->tm_isdst = -1;		/* divine correct DST */
+
 	if (fmt != NULL) {
-		lt = localtime(&tval);
 		t = strptime(p, fmt, lt);
 		if (t == NULL) {
 			fprintf(stderr, "Failed conversion of ``%s''"
@@ -208,8 +210,6 @@
 			badformat();
 	}
 
-	lt = localtime(&tval);
-
 	if (dot != NULL) {			/* .ss */
 		dot++; /* *dot++ = '\0'; */
 		if (strlen(dot) != 2)
@@ -264,9 +264,6 @@
 		}
 	}
 
-	/* Let mktime() decide whether summer time is in effect. */
-	lt->tm_isdst = -1;
-
 	/* convert broken-down time to GMT clock time */
 	if ((tval = mktime(lt)) == -1)
 		errx(1, "nonexistent time");

==== //depot/projects/wifi/crypto/openssh/FREEBSD-Xlist#2 (text+ko) ====

@@ -1,10 +1,9 @@
-$FreeBSD: src/crypto/openssh/FREEBSD-Xlist,v 1.3 2004/02/26 10:37:34 des Exp $
+$FreeBSD: src/crypto/openssh/FREEBSD-Xlist,v 1.4 2008/02/06 23:14:24 des Exp $
 *.0
 */.cvsignore
-.cvsignore
-autom4te*
-config.h.in
-configure
-contrib
-regress/*.[0-9]
-stamp-h.in
+*autom4te*
+*config.h.in
+*configure
+*contrib
+*regress/*.[0-9]
+*stamp-h.in

==== //depot/projects/wifi/crypto/openssh/FREEBSD-upgrade#6 (text+ko) ====

@@ -12,12 +12,12 @@
 
 2) Unpack the tarball in a suitable directory.
+	$ tar xf openssh-X.YpZ.tar.gz \
+	    -X /usr/src/crypto/openssh/FREEBSD-Xlist
+
 3) Remove trash:
 
-	$ sh -c 'while read glob ; do rm -rvf $glob ; done' \

==== //depot/projects/wifi/lib/libc/stdlib/malloc.c#12 (text+ko) ====

-__FBSDID("$FreeBSD: src/lib/libc/stdlib/malloc.c,v 1.162 2008/02/06 02:59:54 jasone Exp $");
+__FBSDID("$FreeBSD: src/lib/libc/stdlib/malloc.c,v 1.164 2008/02/08 08:02:34 jasone Exp $");
 
 #include "libc_private.h"
 #ifdef MALLOC_DEBUG
@@ -315,7 +315,8 @@
  * trials (each deallocation is a trial), so the actual average threshold
  * for clearing the cache is somewhat lower.
  */
-#  define LAZY_FREE_NPROBES	5
+#  define LAZY_FREE_NPROBES_2POW_MIN	2
+#  define LAZY_FREE_NPROBES_2POW_MAX	3
 #endif
 
 /*
@@ -929,30 +930,24 @@
 static void	*arena_palloc(arena_t *arena, size_t alignment, size_t size,
     size_t alloc_size);
 static size_t	arena_salloc(const void *ptr);
+#ifdef MALLOC_LAZY_FREE
+static void	arena_dalloc_lazy_hard(arena_t *arena, arena_chunk_t *chunk,
+    void *ptr, size_t pageind, arena_chunk_map_t *mapelm, unsigned slot);
+#endif
+static void	arena_dalloc_large(arena_t *arena, arena_chunk_t *chunk,
+    void *ptr);
 static void	arena_ralloc_resize_shrink(arena_t *arena, arena_chunk_t *chunk,
     void *ptr, size_t size, size_t oldsize);
 static bool	arena_ralloc_resize_grow(arena_t *arena, arena_chunk_t *chunk,
    void *ptr, size_t size, size_t oldsize);
 static bool	arena_ralloc_resize(void *ptr, size_t size, size_t oldsize);
 static void	*arena_ralloc(void *ptr, size_t size, size_t oldsize);
-#ifdef MALLOC_LAZY_FREE
-static void	arena_dalloc_lazy_hard(arena_t *arena, arena_chunk_t *chunk,
-    void *ptr, size_t pageind, arena_chunk_map_t *mapelm);
-#endif
-static void	arena_dalloc_large(arena_t *arena, arena_chunk_t *chunk,
-    void *ptr);
 static bool	arena_new(arena_t *arena);
 static arena_t	*arenas_extend(unsigned ind);
 static void	*huge_malloc(size_t size, bool zero);
 static void	*huge_palloc(size_t alignment, size_t size);
 static void	*huge_ralloc(void *ptr, size_t size, size_t oldsize);
 static void	huge_dalloc(void *ptr);
-static void	*imalloc(size_t size);
-static void	*ipalloc(size_t alignment, size_t size);
-static void	*icalloc(size_t size);
-static size_t	isalloc(const void *ptr);
-static void	*iralloc(void *ptr, size_t size);
-static void	idalloc(void *ptr);
 static void	malloc_print_stats(void);
 static bool	malloc_init_hard(void);
@@ -2312,6 +2307,7 @@
 			    == 0) {
 				memset((void *)((uintptr_t)chunk + ((run_ind
 				    + i) << pagesize_2pow)), 0, pagesize);
+				/* CHUNK_MAP_UNTOUCHED is cleared below. */
 			}
 		}
@@ -2379,6 +2375,8 @@
 		 * Initialize the map to contain one maximal free untouched
 		 * run.
 		 */
+		memset(chunk->map, (CHUNK_MAP_LARGE | CHUNK_MAP_POS_MASK),
+		    arena_chunk_header_npages);
 		memset(&chunk->map[arena_chunk_header_npages],
 		    CHUNK_MAP_UNTOUCHED,
 		    (chunk_npages - arena_chunk_header_npages));
@@ -2498,7 +2496,8 @@
 		if (chunk->map[i] & CHUNK_MAP_DIRTY) {
 			size_t npages;
 
-			chunk->map[i] = 0;
+			chunk->map[i] = (CHUNK_MAP_LARGE |
+			    CHUNK_MAP_POS_MASK);
 			chunk->ndirty--;
 			arena->ndirty--;
 			/* Find adjacent dirty run(s). */
@@ -2507,7 +2506,8 @@
 			    (chunk->map[i - 1] & CHUNK_MAP_DIRTY);
 			    npages++) {
 				i--;
-				chunk->map[i] = 0;
+				chunk->map[i] = (CHUNK_MAP_LARGE
+				    | CHUNK_MAP_POS_MASK);
 				chunk->ndirty--;
 				arena->ndirty--;
 			}
@@ -2556,7 +2556,9 @@
 		size_t i;
 
 		for (i = 0; i < run_pages; i++) {
-			chunk->map[run_ind + i] = CHUNK_MAP_DIRTY;
+			assert((chunk->map[run_ind + i] & CHUNK_MAP_DIRTY) ==
+			    0);
+			chunk->map[run_ind + i] |= CHUNK_MAP_DIRTY;
 			chunk->ndirty++;
 			arena->ndirty++;
 		}
@@ -3005,6 +3007,28 @@
 		return (arena_malloc_large(arena, size, zero));
 }
 
+static inline void *
+imalloc(size_t size)
+{
+
+	assert(size != 0);
+
+	if (size <= arena_maxclass)
+		return (arena_malloc(choose_arena(), size, false));
+	else
+		return (huge_malloc(size, false));
+}
+
+static inline void *
+icalloc(size_t size)
+{
+
+	if (size <= arena_maxclass)
+		return (arena_malloc(choose_arena(), size, true));
+	else
+		return (huge_malloc(size, true));
+}
+
 /* Only handles large allocations that require more than page alignment. */
 static void *
 arena_palloc(arena_t *arena, size_t alignment, size_t size, size_t alloc_size)
@@ -3084,6 +3108,101 @@
 	return (ret);
 }
 
+static inline void *
+ipalloc(size_t alignment, size_t size)
+{
+	void *ret;
+	size_t ceil_size;
+
+	/*
+	 * Round size up to the nearest multiple of alignment.
+	 *
+	 * This done, we can take advantage of the fact that for each small
+	 * size class, every object is aligned at the smallest power of two
+	 * that is non-zero in the base two representation of the size.  For
+	 * example:
+	 *
+	 *   Size |   Base 2 | Minimum alignment
+	 *   -----+----------+------------------
+	 *     96 |  1100000 |                32
+	 *    144 | 10100000 |                32
+	 *    192 | 11000000 |                64
+	 *
+	 * Depending on runtime settings, it is possible that arena_malloc()
+	 * will further round up to a power of two, but that never causes
+	 * correctness issues.
+	 */
+	ceil_size = (size + (alignment - 1)) & (-alignment);
+	/*
+	 * (ceil_size < size) protects against the combination of maximal
+	 * alignment and size greater than maximal alignment.
+	 */
+	if (ceil_size < size) {
+		/* size_t overflow. */
+		return (NULL);
+	}
+
+	if (ceil_size <= pagesize || (alignment <= pagesize
+	    && ceil_size <= arena_maxclass))
+		ret = arena_malloc(choose_arena(), ceil_size, false);
+	else {
+		size_t run_size;
+
+		/*
+		 * We can't achieve sub-page alignment, so round up alignment
+		 * permanently; it makes later calculations simpler.
+		 */
+		alignment = PAGE_CEILING(alignment);
+		ceil_size = PAGE_CEILING(size);
+		/*
+		 * (ceil_size < size) protects against very large sizes within
+		 * pagesize of SIZE_T_MAX.
+		 *
+		 * (ceil_size + alignment < ceil_size) protects against the
+		 * combination of maximal alignment and ceil_size large enough
+		 * to cause overflow.  This is similar to the first overflow
+		 * check above, but it needs to be repeated due to the new
+		 * ceil_size value, which may now be *equal* to maximal
+		 * alignment, whereas before we only detected overflow if the
+		 * original size was *greater* than maximal alignment.
+		 */
+		if (ceil_size < size || ceil_size + alignment < ceil_size) {
+			/* size_t overflow. */
+			return (NULL);
+		}
+
+		/*
+		 * Calculate the size of the over-size run that arena_palloc()
+		 * would need to allocate in order to guarantee the alignment.
+		 */
+		if (ceil_size >= alignment)
+			run_size = ceil_size + alignment - pagesize;
+		else {
+			/*
+			 * It is possible that (alignment << 1) will cause
+			 * overflow, but it doesn't matter because we also
+			 * subtract pagesize, which in the case of overflow
+			 * leaves us with a very large run_size.  That causes
+			 * the first conditional below to fail, which means
+			 * that the bogus run_size value never gets used for
+			 * anything important.
+			 */
+			run_size = (alignment << 1) - pagesize;
+		}
+
+		if (run_size <= arena_maxclass) {
+			ret = arena_palloc(choose_arena(), alignment, ceil_size,
+			    run_size);
+		} else if (alignment <= chunksize)
+			ret = huge_malloc(ceil_size, false);
+		else
+			ret = huge_palloc(alignment, ceil_size);
+	}
+
+	assert(((uintptr_t)ret & (alignment - 1)) == 0);
+	return (ret);
+}
+
 /* Return the size of the allocation pointed to by ptr. */
 static size_t
 arena_salloc(const void *ptr)
@@ -3099,12 +3218,11 @@
 	chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
 	pageind = (((uintptr_t)ptr - (uintptr_t)chunk) >> pagesize_2pow);
 	mapelm = chunk->map[pageind];
-	if (mapelm != CHUNK_MAP_LARGE) {
+	if ((mapelm & CHUNK_MAP_LARGE) == 0) {
 		arena_run_t *run;
 
 		/* Small allocation size is in the run header. */
-		assert(mapelm <= CHUNK_MAP_POS_MASK);
-		pageind -= mapelm;
+		pageind -= (mapelm & CHUNK_MAP_POS_MASK);
 		run = (arena_run_t *)((uintptr_t)chunk + (pageind <<
 		    pagesize_2pow));
 		assert(run->magic == ARENA_RUN_MAGIC);
@@ -3127,166 +3245,38 @@
 	return (ret);
 }
 
-static void
-arena_ralloc_resize_shrink(arena_t *arena, arena_chunk_t *chunk, void *ptr,
-    size_t size, size_t oldsize)
+static inline size_t
+isalloc(const void *ptr)
 {
-	extent_node_t *node, key;
+	size_t ret;
+	arena_chunk_t *chunk;
 
-	assert(size < oldsize);
-
-	/*
-	 * Shrink the run, and make trailing pages available for other
-	 * allocations.
-	 */
-	key.addr = (void *)((uintptr_t)ptr);
-#ifdef MALLOC_BALANCE
-	arena_lock_balance(arena);
-#else
-	malloc_spin_lock(&arena->lock);
-#endif
-	node = RB_FIND(extent_tree_ad_s, &arena->runs_alloced_ad, &key);
-	assert(node != NULL);
-	arena_run_trim_tail(arena, chunk, node, (arena_run_t *)ptr, oldsize,
-	    size, true);
-#ifdef MALLOC_STATS
-	arena->stats.allocated_large -= oldsize - size;
-#endif
-	malloc_spin_unlock(&arena->lock);
-}
-
-static bool
-arena_ralloc_resize_grow(arena_t *arena, arena_chunk_t *chunk, void *ptr,
-    size_t size, size_t oldsize)
-{
-	extent_node_t *nodeC, key;
-
-	/* Try to extend the run. */
-	assert(size > oldsize);
-	key.addr = (void *)((uintptr_t)ptr + oldsize);
-#ifdef MALLOC_BALANCE
-	arena_lock_balance(arena);
-#else
-	malloc_spin_lock(&arena->lock);
-#endif
-	nodeC = RB_FIND(extent_tree_ad_s, &arena->runs_avail_ad, &key);
-	if (nodeC != NULL && oldsize + nodeC->size >= size) {
-		extent_node_t *nodeA, *nodeB;
-
-		/*
-		 * The next run is available and sufficiently large.  Split the
-		 * following run, then merge the first part with the existing
-		 * allocation.  This results in a bit more tree manipulation
-		 * than absolutely necessary, but it substantially simplifies
-		 * the code.
-		 */
-		arena_run_split(arena, (arena_run_t *)nodeC->addr, size -
-		    oldsize, false, false);
-
-		key.addr = ptr;
-		nodeA = RB_FIND(extent_tree_ad_s, &arena->runs_alloced_ad,
-		    &key);
-		assert(nodeA != NULL);
-
-		key.addr = (void *)((uintptr_t)ptr + oldsize);
-		nodeB = RB_FIND(extent_tree_ad_s, &arena->runs_alloced_ad,
-		    &key);
-		assert(nodeB != NULL);
-
-		nodeA->size += nodeB->size;
-
-		RB_REMOVE(extent_tree_ad_s, &arena->runs_alloced_ad, nodeB);
-		arena_chunk_node_dealloc(chunk, nodeB);
-
-#ifdef MALLOC_STATS
-		arena->stats.allocated_large += size - oldsize;
-#endif
-		malloc_spin_unlock(&arena->lock);
-		return (false);
-	}
-	malloc_spin_unlock(&arena->lock);
-
-	return (true);
-}
+	assert(ptr != NULL);
 
-/*
- * Try to resize a large allocation, in order to avoid copying.  This will
- * always fail if growing an object, and the following run is already in use.
- */
-static bool
-arena_ralloc_resize(void *ptr, size_t size, size_t oldsize)
-{
-	arena_chunk_t *chunk;
-	arena_t *arena;
-
 	chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
-	arena = chunk->arena;
-	assert(arena->magic == ARENA_MAGIC);
+	if (chunk != ptr) {
+		/* Region. */
+		assert(chunk->arena->magic == ARENA_MAGIC);
 
-	if (size < oldsize) {
-		arena_ralloc_resize_shrink(arena, chunk, ptr, size, oldsize);
-		return (false);
+		ret = arena_salloc(ptr);
 	} else {
-		return (arena_ralloc_resize_grow(arena, chunk, ptr, size,
-		    oldsize));
-	}
-}
+		extent_node_t *node, key;
 
-static void *
-arena_ralloc(void *ptr, size_t size, size_t oldsize)
-{
-	void *ret;
+		/* Chunk (huge allocation). */
 
-	/* Try to avoid moving the allocation. */
-	if (size < small_min) {
-		if (oldsize < small_min &&
-		    ffs((int)(pow2_ceil(size) >> (TINY_MIN_2POW + 1)))
-		    == ffs((int)(pow2_ceil(oldsize) >> (TINY_MIN_2POW + 1))))
-			goto IN_PLACE; /* Same size class. */
-	} else if (size <= small_max) {
-		if (oldsize >= small_min && oldsize <= small_max &&
-		    (QUANTUM_CEILING(size) >> opt_quantum_2pow)
-		    == (QUANTUM_CEILING(oldsize) >> opt_quantum_2pow))
-			goto IN_PLACE; /* Same size class. */
-	} else if (size <= bin_maxclass) {
-		if (oldsize > small_max && oldsize <= bin_maxclass &&
-		    pow2_ceil(size) == pow2_ceil(oldsize))
-			goto IN_PLACE; /* Same size class. */
-	} else if (oldsize > bin_maxclass && oldsize <= arena_maxclass) {
-		size_t psize;
+		malloc_mutex_lock(&huge_mtx);
 
-		assert(size > bin_maxclass);
-		psize = PAGE_CEILING(size);
+		/* Extract from tree of huge allocations. */
+		key.addr = __DECONST(void *, ptr);
+		node = RB_FIND(extent_tree_ad_s, &huge, &key);
+		assert(node != NULL);
 
-		if (psize == oldsize)
-			goto IN_PLACE; /* Same size class. */
+		ret = node->size;
 
-		if (arena_ralloc_resize(ptr, psize, oldsize) == false)
-			goto IN_PLACE;
+		malloc_mutex_unlock(&huge_mtx);
 	}
 
-	/*
-	 * If we get here, then size and oldsize are different enough that we
-	 * need to move the object.  In that case, fall back to allocating new
-	 * space and copying.
-	 */
-	ret = arena_malloc(choose_arena(), size, false);
-	if (ret == NULL)
-		return (NULL);
-
-	/* Junk/zero-filling were already done by arena_malloc(). */
-	if (size < oldsize)
-		memcpy(ret, ptr, size);
-	else
-		memcpy(ret, ptr, oldsize);
-	idalloc(ptr);
 	return (ret);
-IN_PLACE:
-	if (opt_junk && size < oldsize)
-		memset((void *)((uintptr_t)ptr + size), 0x5a, oldsize - size);
-	else if (opt_zero && size > oldsize)
-		memset((void *)((uintptr_t)ptr + oldsize), 0, size - oldsize);
-	return (ptr);
 }
 
 static inline void
@@ -3297,8 +3287,7 @@
 	arena_bin_t *bin;
 	size_t size;
 
-	assert(mapelm <= CHUNK_MAP_POS_MASK);
-	pageind -= mapelm;
+	pageind -= (mapelm & CHUNK_MAP_POS_MASK);
 	run = (arena_run_t *)((uintptr_t)chunk + (pageind <<
 	    pagesize_2pow));
 	assert(run->magic == ARENA_RUN_MAGIC);
@@ -3360,7 +3349,7 @@
     size_t pageind, arena_chunk_map_t *mapelm)
 {
 	void **free_cache = arena->free_cache;
-	unsigned i, slot;
+	unsigned i, nprobes, slot;
 
 	if (__isthreaded == false || opt_lazy_free_2pow < 0) {
 		malloc_spin_lock(&arena->lock);
@@ -3369,7 +3358,9 @@
 		return;
 	}
 
-	for (i = 0; i < LAZY_FREE_NPROBES; i++) {
+	nprobes = (1U << LAZY_FREE_NPROBES_2POW_MIN) + PRN(lazy_free,
+	    (LAZY_FREE_NPROBES_2POW_MAX - LAZY_FREE_NPROBES_2POW_MIN));
+	for (i = 0; i < nprobes; i++) {
 		slot = PRN(lazy_free, opt_lazy_free_2pow);
 		if (atomic_cmpset_ptr((uintptr_t *)&free_cache[slot],
 		    (uintptr_t)NULL, (uintptr_t)ptr)) {
@@ -3377,15 +3368,15 @@
 		}
 	}
 
-	arena_dalloc_lazy_hard(arena, chunk, ptr, pageind, mapelm);
+	arena_dalloc_lazy_hard(arena, chunk, ptr, pageind, mapelm, slot);
 }
 
 static void
 arena_dalloc_lazy_hard(arena_t *arena, arena_chunk_t *chunk, void *ptr,
-    size_t pageind, arena_chunk_map_t *mapelm)
+    size_t pageind, arena_chunk_map_t *mapelm, unsigned slot)
 {
 	void **free_cache = arena->free_cache;
-	unsigned i, slot;
+	unsigned i;
 
 	malloc_spin_lock(&arena->lock);
 	arena_dalloc_small(arena, chunk, ptr, pageind, *mapelm);
@@ -3486,9 +3477,8 @@
 	pageind = (((uintptr_t)ptr - (uintptr_t)chunk) >> pagesize_2pow);
 	mapelm = &chunk->map[pageind];
-	if (*mapelm != CHUNK_MAP_LARGE) {
+	if ((*mapelm & CHUNK_MAP_LARGE) == 0) {
 		/* Small allocation. */
-		assert(*mapelm <= CHUNK_MAP_POS_MASK);
 #ifdef MALLOC_LAZY_FREE
 		arena_dalloc_lazy(arena, chunk, ptr, pageind, mapelm);
 #else
@@ -3502,6 +3492,197 @@
 	}
 }
 
+static inline void
+idalloc(void *ptr)
+{
+	arena_chunk_t *chunk;
+
+	assert(ptr != NULL);
+
+	chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
+	if (chunk != ptr)
+		arena_dalloc(chunk->arena, chunk, ptr);
+	else
+		huge_dalloc(ptr);
+}
+
+static void
+arena_ralloc_resize_shrink(arena_t *arena, arena_chunk_t *chunk, void *ptr,
+    size_t size, size_t oldsize)
+{
+	extent_node_t *node, key;
+
+	assert(size < oldsize);
+
+	/*
+	 * Shrink the run, and make trailing pages available for other
+	 * allocations.
+	 */
+	key.addr = (void *)((uintptr_t)ptr);
+#ifdef MALLOC_BALANCE
+	arena_lock_balance(arena);
+#else
+	malloc_spin_lock(&arena->lock);
+#endif
+	node = RB_FIND(extent_tree_ad_s, &arena->runs_alloced_ad, &key);
+	assert(node != NULL);
+	arena_run_trim_tail(arena, chunk, node, (arena_run_t *)ptr, oldsize,
+	    size, true);
+#ifdef MALLOC_STATS
+	arena->stats.allocated_large -= oldsize - size;
+#endif
+	malloc_spin_unlock(&arena->lock);
+}
+
+static bool
+arena_ralloc_resize_grow(arena_t *arena, arena_chunk_t *chunk, void *ptr,
+    size_t size, size_t oldsize)
+{
+	extent_node_t *nodeC, key;
+
+	/* Try to extend the run. */
+	assert(size > oldsize);
+	key.addr = (void *)((uintptr_t)ptr + oldsize);
+#ifdef MALLOC_BALANCE
+	arena_lock_balance(arena);
+#else
+	malloc_spin_lock(&arena->lock);
+#endif
+	nodeC = RB_FIND(extent_tree_ad_s, &arena->runs_avail_ad, &key);
+	if (nodeC != NULL && oldsize + nodeC->size >= size) {
+		extent_node_t *nodeA, *nodeB;
+
+		/*
+		 * The next run is available and sufficiently large.  Split the
+		 * following run, then merge the first part with the existing
+		 * allocation.  This results in a bit more tree manipulation
+		 * than absolutely necessary, but it substantially simplifies
+		 * the code.
+		 */
+		arena_run_split(arena, (arena_run_t *)nodeC->addr, size -
+		    oldsize, false, false);
+
+		key.addr = ptr;
+		nodeA = RB_FIND(extent_tree_ad_s, &arena->runs_alloced_ad,
+		    &key);
+		assert(nodeA != NULL);
+
+		key.addr = (void *)((uintptr_t)ptr + oldsize);
+		nodeB = RB_FIND(extent_tree_ad_s, &arena->runs_alloced_ad,
+		    &key);
+		assert(nodeB != NULL);
+
+		nodeA->size += nodeB->size;
+
+		RB_REMOVE(extent_tree_ad_s, &arena->runs_alloced_ad, nodeB);
+		arena_chunk_node_dealloc(chunk, nodeB);
+
+#ifdef MALLOC_STATS
+		arena->stats.allocated_large += size - oldsize;
+#endif
+		malloc_spin_unlock(&arena->lock);
+		return (false);
+	}
+	malloc_spin_unlock(&arena->lock);
+
+	return (true);
+}
+
+/*
+ * Try to resize a large allocation, in order to avoid copying.  This will
+ * always fail if growing an object, and the following run is already in use.
+ */
+static bool
+arena_ralloc_resize(void *ptr, size_t size, size_t oldsize)
+{
+	arena_chunk_t *chunk;
+	arena_t *arena;
+
+	chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
+	arena = chunk->arena;
+	assert(arena->magic == ARENA_MAGIC);
+
+	if (size < oldsize) {
+		arena_ralloc_resize_shrink(arena, chunk, ptr, size, oldsize);
+		return (false);
+	} else {
+		return (arena_ralloc_resize_grow(arena, chunk, ptr, size,
+		    oldsize));
+	}
+}
+
+static void *
+arena_ralloc(void *ptr, size_t size, size_t oldsize)
+{
+	void *ret;
+	size_t copysize;
+
+	/* Try to avoid moving the allocation. */
+	if (size < small_min) {
+		if (oldsize < small_min &&
+		    ffs((int)(pow2_ceil(size) >> (TINY_MIN_2POW + 1)))
+		    == ffs((int)(pow2_ceil(oldsize) >> (TINY_MIN_2POW + 1))))
+			goto IN_PLACE; /* Same size class. */
+	} else if (size <= small_max) {
+		if (oldsize >= small_min && oldsize <= small_max &&
+		    (QUANTUM_CEILING(size) >> opt_quantum_2pow)
+		    == (QUANTUM_CEILING(oldsize) >> opt_quantum_2pow))
+			goto IN_PLACE; /* Same size class. */
+	} else if (size <= bin_maxclass) {
+		if (oldsize > small_max && oldsize <= bin_maxclass &&
+		    pow2_ceil(size) == pow2_ceil(oldsize))
+			goto IN_PLACE; /* Same size class. */

>>> TRUNCATED FOR MAIL (1000 lines) <<<