Date:      Tue, 22 Sep 2015 03:02:19 +0000 (UTC)
From:      Jason Evans <jasone@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject:   svn commit: r288090 - in head/contrib/jemalloc: . doc include/jemalloc include/jemalloc/internal src
Message-ID:  <201509220302.t8M32JVj092954@repo.freebsd.org>

Author: jasone
Date: Tue Sep 22 03:02:18 2015
New Revision: 288090
URL: https://svnweb.freebsd.org/changeset/base/288090

Log:
  Update jemalloc to 4.0.2.

Modified:
  head/contrib/jemalloc/ChangeLog
  head/contrib/jemalloc/FREEBSD-diffs
  head/contrib/jemalloc/VERSION
  head/contrib/jemalloc/doc/jemalloc.3
  head/contrib/jemalloc/include/jemalloc/internal/arena.h
  head/contrib/jemalloc/include/jemalloc/internal/huge.h
  head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal.h
  head/contrib/jemalloc/include/jemalloc/internal/private_namespace.h
  head/contrib/jemalloc/include/jemalloc/internal/prof.h
  head/contrib/jemalloc/include/jemalloc/internal/size_classes.h
  head/contrib/jemalloc/include/jemalloc/internal/tcache.h
  head/contrib/jemalloc/include/jemalloc/internal/tsd.h
  head/contrib/jemalloc/include/jemalloc/jemalloc.h
  head/contrib/jemalloc/src/arena.c
  head/contrib/jemalloc/src/chunk_dss.c
  head/contrib/jemalloc/src/chunk_mmap.c
  head/contrib/jemalloc/src/huge.c
  head/contrib/jemalloc/src/jemalloc.c
  head/contrib/jemalloc/src/prof.c
  head/contrib/jemalloc/src/tcache.c

Modified: head/contrib/jemalloc/ChangeLog
==============================================================================
--- head/contrib/jemalloc/ChangeLog	Tue Sep 22 02:57:18 2015	(r288089)
+++ head/contrib/jemalloc/ChangeLog	Tue Sep 22 03:02:18 2015	(r288090)
@@ -4,6 +4,76 @@ brevity.  Much more detail can be found 
 
     https://github.com/jemalloc/jemalloc
 
+* 4.0.2 (September 21, 2015)
+
+  This bugfix release addresses a few bugs specific to heap profiling.
+
+  Bug fixes:
+  - Fix ixallocx_prof_sample() to never modify nor create sampled small
+    allocations.  xallocx() is in general incapable of moving small allocations,
+    so this fix removes buggy code without loss of generality.
+  - Fix irallocx_prof_sample() to always allocate large regions, even when
+    alignment is non-zero.
+  - Fix prof_alloc_rollback() to read tdata from thread-specific data rather
+    than dereferencing a potentially invalid tctx.
+
+* 4.0.1 (September 15, 2015)
+
+  This is a bugfix release that is somewhat high risk due to the amount of
+  refactoring required to address deep xallocx() problems.  As a side effect of
+  these fixes, xallocx() now tries harder to partially fulfill requests for
+  optional extra space.  Note that a couple of minor heap profiling
+  optimizations are included, but these are better thought of as performance
+  fixes that were integral to discovering most of the other bugs.
+
+  Optimizations:
+  - Avoid a chunk metadata read in arena_prof_tctx_set(), since it is in the
+    fast path when heap profiling is enabled.  Additionally, split a special
+    case out into arena_prof_tctx_reset(), which also avoids chunk metadata
+    reads.
+  - Optimize irallocx_prof() to optimistically update the sampler state.  The
+    prior implementation appears to have been a holdover from when
+    rallocx()/xallocx() functionality was combined as rallocm().
+
+  Bug fixes:
+  - Fix TLS configuration such that it is enabled by default for platforms on
+    which it works correctly.
+  - Fix arenas_cache_cleanup() and arena_get_hard() to handle
+    allocation/deallocation within the application's thread-specific data
+    cleanup functions even after arenas_cache is torn down.
+  - Fix xallocx() bugs related to size+extra exceeding HUGE_MAXCLASS.
+  - Fix chunk purge hook calls for in-place huge shrinking reallocation to
+    specify the old chunk size rather than the new chunk size.  This bug caused
+    no correctness issues for the default chunk purge function, but was
+    visible to custom functions set via the "arena.<i>.chunk_hooks" mallctl.
+  - Fix heap profiling bugs:
+    + Fix heap profiling to distinguish among otherwise identical sample sites
+      with interposed resets (triggered via the "prof.reset" mallctl).  This bug
+      could cause data structure corruption that would most likely result in a
+      segfault.
+    + Fix irealloc_prof() to call prof_alloc_rollback() on OOM.
+    + Make one call to prof_active_get_unlocked() per allocation event, and use
+      the result throughout the relevant functions that handle an allocation
+      event.  Also add a missing check in prof_realloc().  These fixes protect
+      allocation events against concurrent prof_active changes.
+    + Fix ixallocx_prof() to pass usize_max and zero to ixallocx_prof_sample()
+      in the correct order.
+    + Fix prof_realloc() to call prof_free_sampled_object() after calling
+      prof_malloc_sample_object().  Prior to this fix, if tctx and old_tctx were
+      the same, the tctx could have been prematurely destroyed.
+  - Fix portability bugs:
+    + Don't bitshift by negative amounts when encoding/decoding run sizes in
+      chunk header maps.  This affected systems with page sizes greater than 8
+      KiB.
+    + Rename index_t to szind_t to avoid an existing type on Solaris.
+    + Add JEMALLOC_CXX_THROW to the memalign() function prototype, in order to
+      match glibc and avoid compilation errors when including both
+      jemalloc/jemalloc.h and malloc.h in C++ code.
+    + Don't assume that /bin/sh is appropriate when running size_classes.sh
+      during configuration.
+    + Consider __sparcv9 a synonym for __sparc64__ when defining LG_QUANTUM.
+    + Link tests to librt if it contains clock_gettime(2).
+
 * 4.0.0 (August 17, 2015)
 
   This version contains many speed and space optimizations, both minor and

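The 4.0.1 notes above center on xallocx(), which tries to resize an
allocation in place and now partially fulfills requests for optional
extra space.  As a rough illustration of that documented non-standard
API (a minimal sketch; error handling is deliberately thin):

#include <stdio.h>
#include <jemalloc/jemalloc.h>

int
main(void)
{
	void *p = mallocx(4096, 0);

	if (p == NULL)
		return (1);
	/*
	 * Grow in place to at least 8192 bytes, opportunistically up to
	 * 8192 + 4096.  xallocx() never moves the allocation; it returns
	 * the resulting real size, which is unchanged if nothing could
	 * be done.
	 */
	size_t usize = xallocx(p, 8192, 4096, 0);

	printf("usable size is now %zu bytes\n", usize);
	dallocx(p, 0);
	return (0);
}
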
Modified: head/contrib/jemalloc/FREEBSD-diffs
==============================================================================
--- head/contrib/jemalloc/FREEBSD-diffs	Tue Sep 22 02:57:18 2015	(r288089)
+++ head/contrib/jemalloc/FREEBSD-diffs	Tue Sep 22 03:02:18 2015	(r288090)
@@ -47,7 +47,7 @@ index 8fc774b..fdbef95 100644
 +  </refsect1>
  </refentry>
 diff --git a/include/jemalloc/internal/jemalloc_internal.h.in b/include/jemalloc/internal/jemalloc_internal.h.in
-index 7a137b6..b0001e9 100644
+index 8536a3e..0c2a81f 100644
 --- a/include/jemalloc/internal/jemalloc_internal.h.in
 +++ b/include/jemalloc/internal/jemalloc_internal.h.in
 @@ -8,6 +8,9 @@
@@ -111,10 +111,10 @@ index f051f29..561378f 100644
  
  #endif /* JEMALLOC_H_EXTERNS */
 diff --git a/include/jemalloc/internal/private_symbols.txt b/include/jemalloc/internal/private_symbols.txt
-index dbf6aa7..f87dba8 100644
+index a90021a..34904bf 100644
 --- a/include/jemalloc/internal/private_symbols.txt
 +++ b/include/jemalloc/internal/private_symbols.txt
-@@ -277,7 +277,6 @@ iralloct_realign
+@@ -280,7 +280,6 @@ iralloct_realign
  isalloc
  isdalloct
  isqalloc
@@ -282,7 +282,7 @@ index f943891..47d032c 100755
 +#include "jemalloc_FreeBSD.h"
  EOF
 diff --git a/src/jemalloc.c b/src/jemalloc.c
-index ed7863b..d078a1f 100644
+index 5a2d324..b6cbb79 100644
 --- a/src/jemalloc.c
 +++ b/src/jemalloc.c
 @@ -4,6 +4,10 @@
@@ -296,7 +296,7 @@ index ed7863b..d078a1f 100644
  /* Runtime configuration options. */
  const char	*je_malloc_conf JEMALLOC_ATTR(weak);
  bool	opt_abort =
-@@ -2475,6 +2479,107 @@ je_malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *ptr)
+@@ -2490,6 +2494,107 @@ je_malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *ptr)
   */
  /******************************************************************************/
  /*
@@ -404,7 +404,7 @@ index ed7863b..d078a1f 100644
   * The following functions are used by threading libraries for protection of
   * malloc during fork().
   */
-@@ -2575,4 +2680,11 @@ jemalloc_postfork_child(void)
+@@ -2590,4 +2695,11 @@ jemalloc_postfork_child(void)
  	ctl_postfork_child();
  }
  

Modified: head/contrib/jemalloc/VERSION
==============================================================================
--- head/contrib/jemalloc/VERSION	Tue Sep 22 02:57:18 2015	(r288089)
+++ head/contrib/jemalloc/VERSION	Tue Sep 22 03:02:18 2015	(r288090)
@@ -1 +1 @@
-4.0.0-0-g6e98caf8f064482b9ab292ef3638dea67420bbc2
+4.0.2-0-g486d249fb4715fd3de679b6c2a04f7e657883111

Modified: head/contrib/jemalloc/doc/jemalloc.3
==============================================================================
--- head/contrib/jemalloc/doc/jemalloc.3	Tue Sep 22 02:57:18 2015	(r288089)
+++ head/contrib/jemalloc/doc/jemalloc.3	Tue Sep 22 03:02:18 2015	(r288090)
@@ -2,12 +2,12 @@
 .\"     Title: JEMALLOC
 .\"    Author: Jason Evans
 .\" Generator: DocBook XSL Stylesheets v1.76.1 <http://docbook.sf.net/>;
-.\"      Date: 08/18/2015
+.\"      Date: 09/21/2015
 .\"    Manual: User Manual
-.\"    Source: jemalloc 4.0.0-0-g6e98caf8f064482b9ab292ef3638dea67420bbc2
+.\"    Source: jemalloc 4.0.2-0-g486d249fb4715fd3de679b6c2a04f7e657883111
 .\"  Language: English
 .\"
-.TH "JEMALLOC" "3" "08/18/2015" "jemalloc 4.0.0-0-g6e98caf8f064" "User Manual"
+.TH "JEMALLOC" "3" "09/21/2015" "jemalloc 4.0.2-0-g486d249fb471" "User Manual"
 .\" -----------------------------------------------------------------
 .\" * Define some portability stuff
 .\" -----------------------------------------------------------------
@@ -31,7 +31,7 @@
 jemalloc \- general purpose memory allocation functions
 .SH "LIBRARY"
 .PP
-This manual describes jemalloc 4\&.0\&.0\-0\-g6e98caf8f064482b9ab292ef3638dea67420bbc2\&. More information can be found at the
+This manual describes jemalloc 4\&.0\&.2\-0\-g486d249fb4715fd3de679b6c2a04f7e657883111\&. More information can be found at the
 \m[blue]\fBjemalloc website\fR\m[]\&\s-2\u[1]\d\s+2\&.
 .PP
 The following configuration options are enabled in libc\*(Aqs built\-in jemalloc:

Modified: head/contrib/jemalloc/include/jemalloc/internal/arena.h
==============================================================================
--- head/contrib/jemalloc/include/jemalloc/internal/arena.h	Tue Sep 22 02:57:18 2015	(r288089)
+++ head/contrib/jemalloc/include/jemalloc/internal/arena.h	Tue Sep 22 03:02:18 2015	(r288090)
@@ -39,7 +39,7 @@ typedef struct arena_s arena_t;
 #ifdef JEMALLOC_ARENA_STRUCTS_A
 struct arena_run_s {
 	/* Index of bin this run is associated with. */
-	index_t		binind;
+	szind_t		binind;
 
 	/* Number of free regions in run. */
 	unsigned	nfree;
@@ -424,7 +424,7 @@ extern arena_bin_info_t	arena_bin_info[N
 extern size_t		map_bias; /* Number of arena chunk header pages. */
 extern size_t		map_misc_offset;
 extern size_t		arena_maxrun; /* Max run size for arenas. */
-extern size_t		arena_maxclass; /* Max size class for arenas. */
+extern size_t		large_maxclass; /* Max large size class. */
 extern unsigned		nlclasses; /* Number of large size classes. */
 extern unsigned		nhclasses; /* Number of huge size classes. */
 
@@ -448,7 +448,7 @@ bool	arena_lg_dirty_mult_set(arena_t *ar
 void	arena_maybe_purge(arena_t *arena);
 void	arena_purge_all(arena_t *arena);
 void	arena_tcache_fill_small(arena_t *arena, tcache_bin_t *tbin,
-    index_t binind, uint64_t prof_accumbytes);
+    szind_t binind, uint64_t prof_accumbytes);
 void	arena_alloc_junk_small(void *ptr, arena_bin_info_t *bin_info,
     bool zero);
 #ifdef JEMALLOC_JET
@@ -488,7 +488,7 @@ extern arena_ralloc_junk_large_t *arena_
 bool	arena_ralloc_no_move(void *ptr, size_t oldsize, size_t size,
     size_t extra, bool zero);
 void	*arena_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize,
-    size_t size, size_t extra, size_t alignment, bool zero, tcache_t *tcache);
+    size_t size, size_t alignment, bool zero, tcache_t *tcache);
 dss_prec_t	arena_dss_prec_get(arena_t *arena);
 bool	arena_dss_prec_set(arena_t *arena, dss_prec_t dss_prec);
 ssize_t	arena_lg_dirty_mult_default_get(void);
@@ -519,17 +519,19 @@ arena_chunk_map_misc_t	*arena_run_to_mis
 size_t	*arena_mapbitsp_get(arena_chunk_t *chunk, size_t pageind);
 size_t	arena_mapbitsp_read(size_t *mapbitsp);
 size_t	arena_mapbits_get(arena_chunk_t *chunk, size_t pageind);
+size_t	arena_mapbits_size_decode(size_t mapbits);
 size_t	arena_mapbits_unallocated_size_get(arena_chunk_t *chunk,
     size_t pageind);
 size_t	arena_mapbits_large_size_get(arena_chunk_t *chunk, size_t pageind);
 size_t	arena_mapbits_small_runind_get(arena_chunk_t *chunk, size_t pageind);
-index_t	arena_mapbits_binind_get(arena_chunk_t *chunk, size_t pageind);
+szind_t	arena_mapbits_binind_get(arena_chunk_t *chunk, size_t pageind);
 size_t	arena_mapbits_dirty_get(arena_chunk_t *chunk, size_t pageind);
 size_t	arena_mapbits_unzeroed_get(arena_chunk_t *chunk, size_t pageind);
 size_t	arena_mapbits_decommitted_get(arena_chunk_t *chunk, size_t pageind);
 size_t	arena_mapbits_large_get(arena_chunk_t *chunk, size_t pageind);
 size_t	arena_mapbits_allocated_get(arena_chunk_t *chunk, size_t pageind);
 void	arena_mapbitsp_write(size_t *mapbitsp, size_t mapbits);
+size_t	arena_mapbits_size_encode(size_t size);
 void	arena_mapbits_unallocated_set(arena_chunk_t *chunk, size_t pageind,
     size_t size, size_t flags);
 void	arena_mapbits_unallocated_size_set(arena_chunk_t *chunk, size_t pageind,
@@ -539,21 +541,23 @@ void	arena_mapbits_internal_set(arena_ch
 void	arena_mapbits_large_set(arena_chunk_t *chunk, size_t pageind,
     size_t size, size_t flags);
 void	arena_mapbits_large_binind_set(arena_chunk_t *chunk, size_t pageind,
-    index_t binind);
+    szind_t binind);
 void	arena_mapbits_small_set(arena_chunk_t *chunk, size_t pageind,
-    size_t runind, index_t binind, size_t flags);
+    size_t runind, szind_t binind, size_t flags);
 void	arena_metadata_allocated_add(arena_t *arena, size_t size);
 void	arena_metadata_allocated_sub(arena_t *arena, size_t size);
 size_t	arena_metadata_allocated_get(arena_t *arena);
 bool	arena_prof_accum_impl(arena_t *arena, uint64_t accumbytes);
 bool	arena_prof_accum_locked(arena_t *arena, uint64_t accumbytes);
 bool	arena_prof_accum(arena_t *arena, uint64_t accumbytes);
-index_t	arena_ptr_small_binind_get(const void *ptr, size_t mapbits);
-index_t	arena_bin_index(arena_t *arena, arena_bin_t *bin);
+szind_t	arena_ptr_small_binind_get(const void *ptr, size_t mapbits);
+szind_t	arena_bin_index(arena_t *arena, arena_bin_t *bin);
 unsigned	arena_run_regind(arena_run_t *run, arena_bin_info_t *bin_info,
     const void *ptr);
 prof_tctx_t	*arena_prof_tctx_get(const void *ptr);
-void	arena_prof_tctx_set(const void *ptr, prof_tctx_t *tctx);
+void	arena_prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx);
+void	arena_prof_tctx_reset(const void *ptr, size_t usize,
+    const void *old_ptr, prof_tctx_t *old_tctx);
 void	*arena_malloc(tsd_t *tsd, arena_t *arena, size_t size, bool zero,
     tcache_t *tcache);
 arena_t	*arena_aalloc(const void *ptr);
@@ -653,13 +657,29 @@ arena_mapbits_get(arena_chunk_t *chunk, 
 }
 
 JEMALLOC_ALWAYS_INLINE size_t
+arena_mapbits_size_decode(size_t mapbits)
+{
+	size_t size;
+
+#if CHUNK_MAP_SIZE_SHIFT > 0
+	size = (mapbits & CHUNK_MAP_SIZE_MASK) >> CHUNK_MAP_SIZE_SHIFT;
+#elif CHUNK_MAP_SIZE_SHIFT == 0
+	size = mapbits & CHUNK_MAP_SIZE_MASK;
+#else
+	size = (mapbits & CHUNK_MAP_SIZE_MASK) << -CHUNK_MAP_SIZE_SHIFT;
+#endif
+
+	return (size);
+}
+
+JEMALLOC_ALWAYS_INLINE size_t
 arena_mapbits_unallocated_size_get(arena_chunk_t *chunk, size_t pageind)
 {
 	size_t mapbits;
 
 	mapbits = arena_mapbits_get(chunk, pageind);
 	assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == 0);
-	return ((mapbits & CHUNK_MAP_SIZE_MASK) >> CHUNK_MAP_SIZE_SHIFT);
+	return (arena_mapbits_size_decode(mapbits));
 }
 
 JEMALLOC_ALWAYS_INLINE size_t
@@ -670,7 +690,7 @@ arena_mapbits_large_size_get(arena_chunk
 	mapbits = arena_mapbits_get(chunk, pageind);
 	assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) ==
 	    (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED));
-	return ((mapbits & CHUNK_MAP_SIZE_MASK) >> CHUNK_MAP_SIZE_SHIFT);
+	return (arena_mapbits_size_decode(mapbits));
 }
 
 JEMALLOC_ALWAYS_INLINE size_t
@@ -684,11 +704,11 @@ arena_mapbits_small_runind_get(arena_chu
 	return (mapbits >> CHUNK_MAP_RUNIND_SHIFT);
 }
 
-JEMALLOC_ALWAYS_INLINE index_t
+JEMALLOC_ALWAYS_INLINE szind_t
 arena_mapbits_binind_get(arena_chunk_t *chunk, size_t pageind)
 {
 	size_t mapbits;
-	index_t binind;
+	szind_t binind;
 
 	mapbits = arena_mapbits_get(chunk, pageind);
 	binind = (mapbits & CHUNK_MAP_BININD_MASK) >> CHUNK_MAP_BININD_SHIFT;
@@ -754,6 +774,23 @@ arena_mapbitsp_write(size_t *mapbitsp, s
 	*mapbitsp = mapbits;
 }
 
+JEMALLOC_ALWAYS_INLINE size_t
+arena_mapbits_size_encode(size_t size)
+{
+	size_t mapbits;
+
+#if CHUNK_MAP_SIZE_SHIFT > 0
+	mapbits = size << CHUNK_MAP_SIZE_SHIFT;
+#elif CHUNK_MAP_SIZE_SHIFT == 0
+	mapbits = size;
+#else
+	mapbits = size >> -CHUNK_MAP_SIZE_SHIFT;
+#endif
+
+	assert((mapbits & ~CHUNK_MAP_SIZE_MASK) == 0);
+	return (mapbits);
+}
+
 JEMALLOC_ALWAYS_INLINE void
 arena_mapbits_unallocated_set(arena_chunk_t *chunk, size_t pageind, size_t size,
     size_t flags)
@@ -761,11 +798,10 @@ arena_mapbits_unallocated_set(arena_chun
 	size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
 
 	assert((size & PAGE_MASK) == 0);
-	assert(((size << CHUNK_MAP_SIZE_SHIFT) & ~CHUNK_MAP_SIZE_MASK) == 0);
 	assert((flags & CHUNK_MAP_FLAGS_MASK) == flags);
 	assert((flags & CHUNK_MAP_DECOMMITTED) == 0 || (flags &
 	    (CHUNK_MAP_DIRTY|CHUNK_MAP_UNZEROED)) == 0);
-	arena_mapbitsp_write(mapbitsp, (size << CHUNK_MAP_SIZE_SHIFT) |
+	arena_mapbitsp_write(mapbitsp, arena_mapbits_size_encode(size) |
 	    CHUNK_MAP_BININD_INVALID | flags);
 }
 
@@ -777,10 +813,9 @@ arena_mapbits_unallocated_size_set(arena
 	size_t mapbits = arena_mapbitsp_read(mapbitsp);
 
 	assert((size & PAGE_MASK) == 0);
-	assert(((size << CHUNK_MAP_SIZE_SHIFT) & ~CHUNK_MAP_SIZE_MASK) == 0);
 	assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == 0);
-	arena_mapbitsp_write(mapbitsp, (size << CHUNK_MAP_SIZE_SHIFT) | (mapbits
-	    & ~CHUNK_MAP_SIZE_MASK));
+	arena_mapbitsp_write(mapbitsp, arena_mapbits_size_encode(size) |
+	    (mapbits & ~CHUNK_MAP_SIZE_MASK));
 }
 
 JEMALLOC_ALWAYS_INLINE void
@@ -799,18 +834,17 @@ arena_mapbits_large_set(arena_chunk_t *c
 	size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
 
 	assert((size & PAGE_MASK) == 0);
-	assert(((size << CHUNK_MAP_SIZE_SHIFT) & ~CHUNK_MAP_SIZE_MASK) == 0);
 	assert((flags & CHUNK_MAP_FLAGS_MASK) == flags);
 	assert((flags & CHUNK_MAP_DECOMMITTED) == 0 || (flags &
 	    (CHUNK_MAP_DIRTY|CHUNK_MAP_UNZEROED)) == 0);
-	arena_mapbitsp_write(mapbitsp, (size << CHUNK_MAP_SIZE_SHIFT) |
+	arena_mapbitsp_write(mapbitsp, arena_mapbits_size_encode(size) |
 	    CHUNK_MAP_BININD_INVALID | flags | CHUNK_MAP_LARGE |
 	    CHUNK_MAP_ALLOCATED);
 }
 
 JEMALLOC_ALWAYS_INLINE void
 arena_mapbits_large_binind_set(arena_chunk_t *chunk, size_t pageind,
-    index_t binind)
+    szind_t binind)
 {
 	size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
 	size_t mapbits = arena_mapbitsp_read(mapbitsp);
@@ -824,7 +858,7 @@ arena_mapbits_large_binind_set(arena_chu
 
 JEMALLOC_ALWAYS_INLINE void
 arena_mapbits_small_set(arena_chunk_t *chunk, size_t pageind, size_t runind,
-    index_t binind, size_t flags)
+    szind_t binind, size_t flags)
 {
 	size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
 
@@ -901,10 +935,10 @@ arena_prof_accum(arena_t *arena, uint64_
 	}
 }
 
-JEMALLOC_ALWAYS_INLINE index_t
+JEMALLOC_ALWAYS_INLINE szind_t
 arena_ptr_small_binind_get(const void *ptr, size_t mapbits)
 {
-	index_t binind;
+	szind_t binind;
 
 	binind = (mapbits & CHUNK_MAP_BININD_MASK) >> CHUNK_MAP_BININD_SHIFT;
 
@@ -916,7 +950,7 @@ arena_ptr_small_binind_get(const void *p
 		size_t rpages_ind;
 		arena_run_t *run;
 		arena_bin_t *bin;
-		index_t run_binind, actual_binind;
+		szind_t run_binind, actual_binind;
 		arena_bin_info_t *bin_info;
 		arena_chunk_map_misc_t *miscelm;
 		void *rpages;
@@ -950,10 +984,10 @@ arena_ptr_small_binind_get(const void *p
 #  endif /* JEMALLOC_ARENA_INLINE_A */
 
 #  ifdef JEMALLOC_ARENA_INLINE_B
-JEMALLOC_INLINE index_t
+JEMALLOC_INLINE szind_t
 arena_bin_index(arena_t *arena, arena_bin_t *bin)
 {
-	index_t binind = bin - arena->bins;
+	szind_t binind = bin - arena->bins;
 	assert(binind < NBINS);
 	return (binind);
 }
@@ -1060,7 +1094,7 @@ arena_prof_tctx_get(const void *ptr)
 }
 
 JEMALLOC_INLINE void
-arena_prof_tctx_set(const void *ptr, prof_tctx_t *tctx)
+arena_prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx)
 {
 	arena_chunk_t *chunk;
 
@@ -1070,17 +1104,59 @@ arena_prof_tctx_set(const void *ptr, pro
 	chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
 	if (likely(chunk != ptr)) {
 		size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE;
+
 		assert(arena_mapbits_allocated_get(chunk, pageind) != 0);
 
-		if (unlikely(arena_mapbits_large_get(chunk, pageind) != 0)) {
-			arena_chunk_map_misc_t *elm = arena_miscelm_get(chunk,
-			    pageind);
+		if (unlikely(usize > SMALL_MAXCLASS || (uintptr_t)tctx >
+		    (uintptr_t)1U)) {
+			arena_chunk_map_misc_t *elm;
+
+			assert(arena_mapbits_large_get(chunk, pageind) != 0);
+
+			elm = arena_miscelm_get(chunk, pageind);
 			atomic_write_p(&elm->prof_tctx_pun, tctx);
+		} else {
+			/*
+			 * tctx must always be initialized for large runs.
+			 * Assert that the surrounding conditional logic is
+			 * equivalent to checking whether ptr refers to a large
+			 * run.
+			 */
+			assert(arena_mapbits_large_get(chunk, pageind) == 0);
 		}
 	} else
 		huge_prof_tctx_set(ptr, tctx);
 }
 
+JEMALLOC_INLINE void
+arena_prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr,
+    prof_tctx_t *old_tctx)
+{
+
+	cassert(config_prof);
+	assert(ptr != NULL);
+
+	if (unlikely(usize > SMALL_MAXCLASS || (ptr == old_ptr &&
+	    (uintptr_t)old_tctx > (uintptr_t)1U))) {
+		arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
+		if (likely(chunk != ptr)) {
+			size_t pageind;
+			arena_chunk_map_misc_t *elm;
+
+			pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >>
+			    LG_PAGE;
+			assert(arena_mapbits_allocated_get(chunk, pageind) !=
+			    0);
+			assert(arena_mapbits_large_get(chunk, pageind) != 0);
+
+			elm = arena_miscelm_get(chunk, pageind);
+			atomic_write_p(&elm->prof_tctx_pun,
+			    (prof_tctx_t *)(uintptr_t)1U);
+		} else
+			huge_prof_tctx_reset(ptr);
+	}
+}
+
 JEMALLOC_ALWAYS_INLINE void *
 arena_malloc(tsd_t *tsd, arena_t *arena, size_t size, bool zero,
     tcache_t *tcache)
@@ -1098,7 +1174,7 @@ arena_malloc(tsd_t *tsd, arena_t *arena,
 			    zero));
 		} else
 			return (arena_malloc_small(arena, size, zero));
-	} else if (likely(size <= arena_maxclass)) {
+	} else if (likely(size <= large_maxclass)) {
 		/*
 		 * Initialize tcache after checking size in order to avoid
 		 * infinite recursion during tcache initialization.
@@ -1131,7 +1207,7 @@ arena_salloc(const void *ptr, bool demot
 	size_t ret;
 	arena_chunk_t *chunk;
 	size_t pageind;
-	index_t binind;
+	szind_t binind;
 
 	assert(ptr != NULL);
 
@@ -1190,7 +1266,7 @@ arena_dalloc(tsd_t *tsd, void *ptr, tcac
 		if (likely((mapbits & CHUNK_MAP_LARGE) == 0)) {
 			/* Small allocation. */
 			if (likely(tcache != NULL)) {
-				index_t binind = arena_ptr_small_binind_get(ptr,
+				szind_t binind = arena_ptr_small_binind_get(ptr,
 				    mapbits);
 				tcache_dalloc_small(tsd, tcache, ptr, binind);
 			} else {
@@ -1242,7 +1318,7 @@ arena_sdalloc(tsd_t *tsd, void *ptr, siz
 		if (likely(size <= SMALL_MAXCLASS)) {
 			/* Small allocation. */
 			if (likely(tcache != NULL)) {
-				index_t binind = size2index(size);
+				szind_t binind = size2index(size);
 				tcache_dalloc_small(tsd, tcache, ptr, binind);
 			} else {
 				size_t pageind = ((uintptr_t)ptr -

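The arena_mapbits_size_encode()/arena_mapbits_size_decode() helpers
added above exist because CHUNK_MAP_SIZE_SHIFT can be zero or negative
(the ChangeLog's "page sizes greater than 8 KiB" case), and C leaves
shifts by negative amounts undefined.  A standalone sketch of the same
compile-time dispatch, with illustrative constants rather than
jemalloc's real values:

#include <assert.h>
#include <stddef.h>

#define	MAP_SIZE_SHIFT	(-2)			/* illustrative only */
#define	MAP_SIZE_MASK	((size_t)0x3fffffff)	/* illustrative only */

static size_t
map_size_encode(size_t size)
{
	size_t mapbits;

#if MAP_SIZE_SHIFT > 0
	mapbits = size << MAP_SIZE_SHIFT;
#elif MAP_SIZE_SHIFT == 0
	mapbits = size;
#else
	/* Negate first so the shift amount itself is never negative. */
	mapbits = size >> -MAP_SIZE_SHIFT;
#endif
	assert((mapbits & ~MAP_SIZE_MASK) == 0);
	return (mapbits);
}

static size_t
map_size_decode(size_t mapbits)
{

#if MAP_SIZE_SHIFT > 0
	return ((mapbits & MAP_SIZE_MASK) >> MAP_SIZE_SHIFT);
#elif MAP_SIZE_SHIFT == 0
	return (mapbits & MAP_SIZE_MASK);
#else
	return ((mapbits & MAP_SIZE_MASK) << -MAP_SIZE_SHIFT);
#endif
}

int
main(void)
{
	size_t size = (size_t)5 << 14;	/* five 16 KiB pages */

	/* Round-trips because page-aligned sizes have zero low bits. */
	assert(map_size_decode(map_size_encode(size)) == size);
	return (0);
}
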
Modified: head/contrib/jemalloc/include/jemalloc/internal/huge.h
==============================================================================
--- head/contrib/jemalloc/include/jemalloc/internal/huge.h	Tue Sep 22 02:57:18 2015	(r288089)
+++ head/contrib/jemalloc/include/jemalloc/internal/huge.h	Tue Sep 22 03:02:18 2015	(r288090)
@@ -13,11 +13,10 @@ void	*huge_malloc(tsd_t *tsd, arena_t *a
     tcache_t *tcache);
 void	*huge_palloc(tsd_t *tsd, arena_t *arena, size_t size, size_t alignment,
     bool zero, tcache_t *tcache);
-bool	huge_ralloc_no_move(void *ptr, size_t oldsize, size_t size,
-    size_t extra, bool zero);
+bool	huge_ralloc_no_move(void *ptr, size_t oldsize, size_t usize_min,
+    size_t usize_max, bool zero);
 void	*huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize,
-    size_t size, size_t extra, size_t alignment, bool zero,
-    tcache_t *tcache);
+    size_t usize, size_t alignment, bool zero, tcache_t *tcache);
 #ifdef JEMALLOC_JET
 typedef void (huge_dalloc_junk_t)(void *, size_t);
 extern huge_dalloc_junk_t *huge_dalloc_junk;
@@ -27,6 +26,7 @@ arena_t	*huge_aalloc(const void *ptr);
 size_t	huge_salloc(const void *ptr);
 prof_tctx_t	*huge_prof_tctx_get(const void *ptr);
 void	huge_prof_tctx_set(const void *ptr, prof_tctx_t *tctx);
+void	huge_prof_tctx_reset(const void *ptr);
 
 #endif /* JEMALLOC_H_EXTERNS */
 /******************************************************************************/

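huge_ralloc_no_move() above now takes precomputed usable-size bounds
(usize_min, usize_max) instead of a raw (size, extra) pair, which is
part of how the xallocx() refactor keeps size+extra from exceeding
HUGE_MAXCLASS.  A hedged sketch of the bound computation a caller might
perform; s2u_stub() stands in for jemalloc's internal s2u(), and the
clamping follows the ChangeLog description rather than the exact
upstream code:

#include <stddef.h>

#define	HUGE_MAXCLASS	((((size_t)1) << 31) + (((size_t)3) << 29))

static size_t
s2u_stub(size_t size)
{

	/* Stand-in for s2u(): rounds a request up to its size class. */
	return (size);
}

static void
xallocx_bounds(size_t size, size_t extra, size_t *usize_min,
    size_t *usize_max)
{

	/*
	 * Clamp extra so size + extra cannot exceed HUGE_MAXCLASS;
	 * assumes size itself was already checked against the limit.
	 */
	if (extra > HUGE_MAXCLASS - size)
		extra = HUGE_MAXCLASS - size;
	*usize_min = s2u_stub(size);
	*usize_max = s2u_stub(size + extra);
}

int
main(void)
{
	size_t min, max;

	xallocx_bounds(4096, (size_t)-1, &min, &max);	/* absurd extra */
	return (max <= HUGE_MAXCLASS ? 0 : 1);
}
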
Modified: head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal.h
==============================================================================
--- head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal.h	Tue Sep 22 02:57:18 2015	(r288089)
+++ head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal.h	Tue Sep 22 03:02:18 2015	(r288090)
@@ -181,7 +181,7 @@ static const bool config_cache_oblivious
 #include "jemalloc/internal/jemalloc_internal_macros.h"
 
 /* Size class index type. */
-typedef unsigned index_t;
+typedef unsigned szind_t;
 
 /*
  * Flags bits:
@@ -229,7 +229,7 @@ typedef unsigned index_t;
 #  ifdef __alpha__
 #    define LG_QUANTUM		4
 #  endif
-#  ifdef __sparc64__
+#  if (defined(__sparc64__) || defined(__sparcv9))
 #    define LG_QUANTUM		4
 #  endif
 #  if (defined(__amd64__) || defined(__x86_64__) || defined(_M_X64))
@@ -508,12 +508,12 @@ void	jemalloc_postfork_child(void);
 #include "jemalloc/internal/huge.h"
 
 #ifndef JEMALLOC_ENABLE_INLINE
-index_t	size2index_compute(size_t size);
-index_t	size2index_lookup(size_t size);
-index_t	size2index(size_t size);
-size_t	index2size_compute(index_t index);
-size_t	index2size_lookup(index_t index);
-size_t	index2size(index_t index);
+szind_t	size2index_compute(size_t size);
+szind_t	size2index_lookup(size_t size);
+szind_t	size2index(size_t size);
+size_t	index2size_compute(szind_t index);
+size_t	index2size_lookup(szind_t index);
+size_t	index2size(szind_t index);
 size_t	s2u_compute(size_t size);
 size_t	s2u_lookup(size_t size);
 size_t	s2u(size_t size);
@@ -524,7 +524,7 @@ arena_t	*arena_get(tsd_t *tsd, unsigned 
 #endif
 
 #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_C_))
-JEMALLOC_INLINE index_t
+JEMALLOC_INLINE szind_t
 size2index_compute(size_t size)
 {
 
@@ -555,7 +555,7 @@ size2index_compute(size_t size)
 	}
 }
 
-JEMALLOC_ALWAYS_INLINE index_t
+JEMALLOC_ALWAYS_INLINE szind_t
 size2index_lookup(size_t size)
 {
 
@@ -568,7 +568,7 @@ size2index_lookup(size_t size)
 	}
 }
 
-JEMALLOC_ALWAYS_INLINE index_t
+JEMALLOC_ALWAYS_INLINE szind_t
 size2index(size_t size)
 {
 
@@ -579,7 +579,7 @@ size2index(size_t size)
 }
 
 JEMALLOC_INLINE size_t
-index2size_compute(index_t index)
+index2size_compute(szind_t index)
 {
 
 #if (NTBINS > 0)
@@ -606,7 +606,7 @@ index2size_compute(index_t index)
 }
 
 JEMALLOC_ALWAYS_INLINE size_t
-index2size_lookup(index_t index)
+index2size_lookup(szind_t index)
 {
 	size_t ret = (size_t)index2size_tab[index];
 	assert(ret == index2size_compute(index));
@@ -614,7 +614,7 @@ index2size_lookup(index_t index)
 }
 
 JEMALLOC_ALWAYS_INLINE size_t
-index2size(index_t index)
+index2size(szind_t index)
 {
 
 	assert(index < NSIZES);
@@ -702,7 +702,7 @@ sa2u(size_t size, size_t alignment)
 	}
 
 	/* Try for a large size class. */
-	if (likely(size <= arena_maxclass) && likely(alignment < chunksize)) {
+	if (likely(size <= large_maxclass) && likely(alignment < chunksize)) {
 		/*
 		 * We can't achieve subpage alignment, so round up alignment
 		 * to the minimum that can actually be supported.
@@ -973,7 +973,7 @@ u2rz(size_t usize)
 	size_t ret;
 
 	if (usize <= SMALL_MAXCLASS) {
-		index_t binind = size2index(usize);
+		szind_t binind = size2index(usize);
 		ret = arena_bin_info[binind].redzone_size;
 	} else
 		ret = 0;
@@ -1093,7 +1093,7 @@ iralloct(tsd_t *tsd, void *ptr, size_t o
 		    zero, tcache, arena));
 	}
 
-	return (arena_ralloc(tsd, arena, ptr, oldsize, size, 0, alignment, zero,
+	return (arena_ralloc(tsd, arena, ptr, oldsize, size, alignment, zero,
 	    tcache));
 }
 

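The size2index()/index2size() pair retyped to szind_t above are inverse
mappings between byte sizes and size-class indices.  A toy model of the
invariant they maintain; real jemalloc size classes are grouped rather
than simple powers of two, so this is purely illustrative:

#include <assert.h>
#include <stddef.h>

typedef unsigned szind_t;	/* mirrors the renamed typedef */

static szind_t
toy_size2index(size_t size)
{
	szind_t index = 0;

	while (((size_t)8 << index) < size)
		index++;
	return (index);
}

static size_t
toy_index2size(szind_t index)
{

	return ((size_t)8 << index);
}

int
main(void)
{

	/* A 100-byte request rounds up to the 128-byte class (index 4). */
	assert(toy_size2index(100) == 4);
	assert(toy_index2size(toy_size2index(100)) == 128);
	return (0);
}
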
Modified: head/contrib/jemalloc/include/jemalloc/internal/private_namespace.h
==============================================================================
--- head/contrib/jemalloc/include/jemalloc/internal/private_namespace.h	Tue Sep 22 02:57:18 2015	(r288089)
+++ head/contrib/jemalloc/include/jemalloc/internal/private_namespace.h	Tue Sep 22 03:02:18 2015	(r288090)
@@ -50,13 +50,14 @@
 #define	arena_mapbitsp_get JEMALLOC_N(arena_mapbitsp_get)
 #define	arena_mapbitsp_read JEMALLOC_N(arena_mapbitsp_read)
 #define	arena_mapbitsp_write JEMALLOC_N(arena_mapbitsp_write)
+#define	arena_mapbits_size_decode JEMALLOC_N(arena_mapbits_size_decode)
+#define	arena_mapbits_size_encode JEMALLOC_N(arena_mapbits_size_encode)
 #define	arena_mapbits_small_runind_get JEMALLOC_N(arena_mapbits_small_runind_get)
 #define	arena_mapbits_small_set JEMALLOC_N(arena_mapbits_small_set)
 #define	arena_mapbits_unallocated_set JEMALLOC_N(arena_mapbits_unallocated_set)
 #define	arena_mapbits_unallocated_size_get JEMALLOC_N(arena_mapbits_unallocated_size_get)
 #define	arena_mapbits_unallocated_size_set JEMALLOC_N(arena_mapbits_unallocated_size_set)
 #define	arena_mapbits_unzeroed_get JEMALLOC_N(arena_mapbits_unzeroed_get)
-#define	arena_maxclass JEMALLOC_N(arena_maxclass)
 #define	arena_maxrun JEMALLOC_N(arena_maxrun)
 #define	arena_maybe_purge JEMALLOC_N(arena_maybe_purge)
 #define	arena_metadata_allocated_add JEMALLOC_N(arena_metadata_allocated_add)
@@ -79,6 +80,7 @@
 #define	arena_prof_accum_locked JEMALLOC_N(arena_prof_accum_locked)
 #define	arena_prof_promoted JEMALLOC_N(arena_prof_promoted)
 #define	arena_prof_tctx_get JEMALLOC_N(arena_prof_tctx_get)
+#define	arena_prof_tctx_reset JEMALLOC_N(arena_prof_tctx_reset)
 #define	arena_prof_tctx_set JEMALLOC_N(arena_prof_tctx_set)
 #define	arena_ptr_small_binind_get JEMALLOC_N(arena_ptr_small_binind_get)
 #define	arena_purge_all JEMALLOC_N(arena_purge_all)
@@ -249,6 +251,7 @@
 #define	huge_malloc JEMALLOC_N(huge_malloc)
 #define	huge_palloc JEMALLOC_N(huge_palloc)
 #define	huge_prof_tctx_get JEMALLOC_N(huge_prof_tctx_get)
+#define	huge_prof_tctx_reset JEMALLOC_N(huge_prof_tctx_reset)
 #define	huge_prof_tctx_set JEMALLOC_N(huge_prof_tctx_set)
 #define	huge_ralloc JEMALLOC_N(huge_ralloc)
 #define	huge_ralloc_no_move JEMALLOC_N(huge_ralloc_no_move)
@@ -282,6 +285,7 @@
 #define	jemalloc_postfork_child JEMALLOC_N(jemalloc_postfork_child)
 #define	jemalloc_postfork_parent JEMALLOC_N(jemalloc_postfork_parent)
 #define	jemalloc_prefork JEMALLOC_N(jemalloc_prefork)
+#define	large_maxclass JEMALLOC_N(large_maxclass)
 #define	lg_floor JEMALLOC_N(lg_floor)
 #define	malloc_cprintf JEMALLOC_N(malloc_cprintf)
 #define	malloc_mutex_init JEMALLOC_N(malloc_mutex_init)
@@ -376,6 +380,7 @@
 #define	prof_sample_accum_update JEMALLOC_N(prof_sample_accum_update)
 #define	prof_sample_threshold_update JEMALLOC_N(prof_sample_threshold_update)
 #define	prof_tctx_get JEMALLOC_N(prof_tctx_get)
+#define	prof_tctx_reset JEMALLOC_N(prof_tctx_reset)
 #define	prof_tctx_set JEMALLOC_N(prof_tctx_set)
 #define	prof_tdata_cleanup JEMALLOC_N(prof_tdata_cleanup)
 #define	prof_tdata_get JEMALLOC_N(prof_tdata_get)

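For context on the macro wall above: JEMALLOC_N() rewrites each private
symbol with a reserved prefix so the allocator's internals cannot
collide with application identifiers.  A minimal sketch, assuming an
illustrative "__je_" prefix (the real prefix is configuration
dependent):

#include <stddef.h>

#define	JEMALLOC_N(n)	__je_##n
#define	large_maxclass	JEMALLOC_N(large_maxclass)

/* After preprocessing, this defines __je_large_maxclass; every use of
 * large_maxclass inside the allocator resolves to the same mangled
 * name, keeping it out of the application's namespace. */
size_t large_maxclass;
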
Modified: head/contrib/jemalloc/include/jemalloc/internal/prof.h
==============================================================================
--- head/contrib/jemalloc/include/jemalloc/internal/prof.h	Tue Sep 22 02:57:18 2015	(r288089)
+++ head/contrib/jemalloc/include/jemalloc/internal/prof.h	Tue Sep 22 03:02:18 2015	(r288090)
@@ -90,10 +90,11 @@ struct prof_tctx_s {
 	prof_tdata_t		*tdata;
 
 	/*
-	 * Copy of tdata->thr_uid, necessary because tdata may be defunct during
-	 * teardown.
+	 * Copy of tdata->thr_{uid,discrim}, necessary because tdata may be
+	 * defunct during teardown.
 	 */
 	uint64_t		thr_uid;
+	uint64_t		thr_discrim;
 
 	/* Profiling counters, protected by tdata->lock. */
 	prof_cnt_t		cnts;
@@ -330,14 +331,18 @@ bool	prof_gdump_get_unlocked(void);
 prof_tdata_t	*prof_tdata_get(tsd_t *tsd, bool create);
 bool	prof_sample_accum_update(tsd_t *tsd, size_t usize, bool commit,
     prof_tdata_t **tdata_out);
-prof_tctx_t	*prof_alloc_prep(tsd_t *tsd, size_t usize, bool update);
+prof_tctx_t	*prof_alloc_prep(tsd_t *tsd, size_t usize, bool prof_active,
+    bool update);
 prof_tctx_t	*prof_tctx_get(const void *ptr);
-void	prof_tctx_set(const void *ptr, prof_tctx_t *tctx);
+void	prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx);
+void	prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr,
+    prof_tctx_t *tctx);
 void	prof_malloc_sample_object(const void *ptr, size_t usize,
     prof_tctx_t *tctx);
 void	prof_malloc(const void *ptr, size_t usize, prof_tctx_t *tctx);
 void	prof_realloc(tsd_t *tsd, const void *ptr, size_t usize,
-    prof_tctx_t *tctx, bool updated, size_t old_usize, prof_tctx_t *old_tctx);
+    prof_tctx_t *tctx, bool prof_active, bool updated, const void *old_ptr,
+    size_t old_usize, prof_tctx_t *old_tctx);
 void	prof_free(tsd_t *tsd, const void *ptr, size_t usize);
 #endif
 
@@ -402,13 +407,24 @@ prof_tctx_get(const void *ptr)
 }
 
 JEMALLOC_ALWAYS_INLINE void
-prof_tctx_set(const void *ptr, prof_tctx_t *tctx)
+prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx)
 {
 
 	cassert(config_prof);
 	assert(ptr != NULL);
 
-	arena_prof_tctx_set(ptr, tctx);
+	arena_prof_tctx_set(ptr, usize, tctx);
+}
+
+JEMALLOC_ALWAYS_INLINE void
+prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr,
+    prof_tctx_t *old_tctx)
+{
+
+	cassert(config_prof);
+	assert(ptr != NULL);
+
+	arena_prof_tctx_reset(ptr, usize, old_ptr, old_tctx);
 }
 
 JEMALLOC_ALWAYS_INLINE bool
@@ -442,7 +458,7 @@ prof_sample_accum_update(tsd_t *tsd, siz
 }
 
 JEMALLOC_ALWAYS_INLINE prof_tctx_t *
-prof_alloc_prep(tsd_t *tsd, size_t usize, bool update)
+prof_alloc_prep(tsd_t *tsd, size_t usize, bool prof_active, bool update)
 {
 	prof_tctx_t *ret;
 	prof_tdata_t *tdata;
@@ -450,8 +466,8 @@ prof_alloc_prep(tsd_t *tsd, size_t usize
 
 	assert(usize == s2u(usize));
 
-	if (!prof_active_get_unlocked() || likely(prof_sample_accum_update(tsd,
-	    usize, update, &tdata)))
+	if (!prof_active || likely(prof_sample_accum_update(tsd, usize, update,
+	    &tdata)))
 		ret = (prof_tctx_t *)(uintptr_t)1U;
 	else {
 		bt_init(&bt, tdata->vec);
@@ -473,22 +489,24 @@ prof_malloc(const void *ptr, size_t usiz
 	if (unlikely((uintptr_t)tctx > (uintptr_t)1U))
 		prof_malloc_sample_object(ptr, usize, tctx);
 	else
-		prof_tctx_set(ptr, (prof_tctx_t *)(uintptr_t)1U);
+		prof_tctx_set(ptr, usize, (prof_tctx_t *)(uintptr_t)1U);
 }
 
 JEMALLOC_ALWAYS_INLINE void
 prof_realloc(tsd_t *tsd, const void *ptr, size_t usize, prof_tctx_t *tctx,
-    bool updated, size_t old_usize, prof_tctx_t *old_tctx)
+    bool prof_active, bool updated, const void *old_ptr, size_t old_usize,
+    prof_tctx_t *old_tctx)
 {
+	bool sampled, old_sampled;
 
 	cassert(config_prof);
 	assert(ptr != NULL || (uintptr_t)tctx <= (uintptr_t)1U);
 
-	if (!updated && ptr != NULL) {
+	if (prof_active && !updated && ptr != NULL) {
 		assert(usize == isalloc(ptr, true));
 		if (prof_sample_accum_update(tsd, usize, true, NULL)) {
 			/*
-			 * Don't sample.  The usize passed to PROF_ALLOC_PREP()
+			 * Don't sample.  The usize passed to prof_alloc_prep()
 			 * was larger than what actually got allocated, so a
 			 * backtrace was captured for this allocation, even
 			 * though its actual usize was insufficient to cross the
@@ -498,12 +516,16 @@ prof_realloc(tsd_t *tsd, const void *ptr
 		}
 	}
 
-	if (unlikely((uintptr_t)old_tctx > (uintptr_t)1U))
-		prof_free_sampled_object(tsd, old_usize, old_tctx);
-	if (unlikely((uintptr_t)tctx > (uintptr_t)1U))
+	sampled = ((uintptr_t)tctx > (uintptr_t)1U);
+	old_sampled = ((uintptr_t)old_tctx > (uintptr_t)1U);
+
+	if (unlikely(sampled))
 		prof_malloc_sample_object(ptr, usize, tctx);
 	else
-		prof_tctx_set(ptr, (prof_tctx_t *)(uintptr_t)1U);
+		prof_tctx_reset(ptr, usize, old_ptr, old_tctx);
+
+	if (unlikely(old_sampled))
+		prof_free_sampled_object(tsd, old_usize, old_tctx);
 }
 
 JEMALLOC_ALWAYS_INLINE void

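The reordering at the bottom of prof_realloc() above is the 4.0.1 fix
for premature tctx destruction: when tctx and old_tctx name the same
sampled context ((uintptr_t)1U is the "not sampled" sentinel), releasing
the old reference before registering the new one could drop the
context's reference count to zero.  A toy refcount sketch of the fixed
ordering; the names are illustrative, not jemalloc's:

#include <assert.h>

typedef struct {
	int refs;
	int destroyed;
} toy_tctx_t;

static void
ctx_retain(toy_tctx_t *ctx)
{

	assert(!ctx->destroyed);	/* would be use-after-free */
	ctx->refs++;
}

static void
ctx_release(toy_tctx_t *ctx)
{

	if (--ctx->refs == 0)
		ctx->destroyed = 1;	/* freed in the real code */
}

int
main(void)
{
	toy_tctx_t ctx = { 1, 0 };	/* tctx == old_tctx, one reference */

	/*
	 * Fixed order: sample the new object first, then release the
	 * old one.  The pre-fix order (release, then retain) would have
	 * destroyed the shared context between the two calls.
	 */
	ctx_retain(&ctx);	/* prof_malloc_sample_object() */
	ctx_release(&ctx);	/* prof_free_sampled_object() */
	assert(!ctx.destroyed);
	return (0);
}
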
Modified: head/contrib/jemalloc/include/jemalloc/internal/size_classes.h
==============================================================================
--- head/contrib/jemalloc/include/jemalloc/internal/size_classes.h	Tue Sep 22 02:57:18 2015	(r288089)
+++ head/contrib/jemalloc/include/jemalloc/internal/size_classes.h	Tue Sep 22 03:02:18 2015	(r288090)
@@ -25,6 +25,7 @@
  *   LOOKUP_MAXCLASS: Maximum size class included in lookup table.
  *   SMALL_MAXCLASS: Maximum small size class.
  *   LG_LARGE_MINCLASS: Lg of minimum large size class.
+ *   HUGE_MAXCLASS: Maximum (huge) size class.
  */
 
 #define	LG_SIZE_CLASS_GROUP	2
@@ -180,6 +181,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 13) + (((size_t)3) << 11))
 #define	LG_LARGE_MINCLASS	14
+#define	HUGE_MAXCLASS		((((size_t)1) << 31) + (((size_t)3) << 29))
 #endif
 
 #if (LG_SIZEOF_PTR == 2 && LG_TINY_MIN == 3 && LG_QUANTUM == 3 && LG_PAGE == 13)
@@ -333,6 +335,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 14) + (((size_t)3) << 12))
 #define	LG_LARGE_MINCLASS	15
+#define	HUGE_MAXCLASS		((((size_t)1) << 31) + (((size_t)3) << 29))
 #endif
 
 #if (LG_SIZEOF_PTR == 2 && LG_TINY_MIN == 3 && LG_QUANTUM == 3 && LG_PAGE == 14)
@@ -486,6 +489,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 15) + (((size_t)3) << 13))
 #define	LG_LARGE_MINCLASS	16
+#define	HUGE_MAXCLASS		((((size_t)1) << 31) + (((size_t)3) << 29))
 #endif
 
 #if (LG_SIZEOF_PTR == 2 && LG_TINY_MIN == 3 && LG_QUANTUM == 3 && LG_PAGE == 16)
@@ -639,6 +643,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 17) + (((size_t)3) << 15))
 #define	LG_LARGE_MINCLASS	18
+#define	HUGE_MAXCLASS		((((size_t)1) << 31) + (((size_t)3) << 29))
 #endif
 
 #if (LG_SIZEOF_PTR == 2 && LG_TINY_MIN == 3 && LG_QUANTUM == 4 && LG_PAGE == 12)
@@ -789,6 +794,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 13) + (((size_t)3) << 11))
 #define	LG_LARGE_MINCLASS	14
+#define	HUGE_MAXCLASS		((((size_t)1) << 31) + (((size_t)3) << 29))
 #endif
 
 #if (LG_SIZEOF_PTR == 2 && LG_TINY_MIN == 3 && LG_QUANTUM == 4 && LG_PAGE == 13)
@@ -939,6 +945,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 14) + (((size_t)3) << 12))
 #define	LG_LARGE_MINCLASS	15
+#define	HUGE_MAXCLASS		((((size_t)1) << 31) + (((size_t)3) << 29))
 #endif
 
 #if (LG_SIZEOF_PTR == 2 && LG_TINY_MIN == 3 && LG_QUANTUM == 4 && LG_PAGE == 14)
@@ -1089,6 +1096,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 15) + (((size_t)3) << 13))
 #define	LG_LARGE_MINCLASS	16
+#define	HUGE_MAXCLASS		((((size_t)1) << 31) + (((size_t)3) << 29))
 #endif
 
 #if (LG_SIZEOF_PTR == 2 && LG_TINY_MIN == 3 && LG_QUANTUM == 4 && LG_PAGE == 16)
@@ -1239,6 +1247,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 17) + (((size_t)3) << 15))
 #define	LG_LARGE_MINCLASS	18
+#define	HUGE_MAXCLASS		((((size_t)1) << 31) + (((size_t)3) << 29))
 #endif
 
 #if (LG_SIZEOF_PTR == 2 && LG_TINY_MIN == 4 && LG_QUANTUM == 4 && LG_PAGE == 12)
@@ -1387,6 +1396,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 13) + (((size_t)3) << 11))
 #define	LG_LARGE_MINCLASS	14
+#define	HUGE_MAXCLASS		((((size_t)1) << 31) + (((size_t)3) << 29))
 #endif
 
 #if (LG_SIZEOF_PTR == 2 && LG_TINY_MIN == 4 && LG_QUANTUM == 4 && LG_PAGE == 13)
@@ -1535,6 +1545,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 14) + (((size_t)3) << 12))
 #define	LG_LARGE_MINCLASS	15
+#define	HUGE_MAXCLASS		((((size_t)1) << 31) + (((size_t)3) << 29))
 #endif
 
 #if (LG_SIZEOF_PTR == 2 && LG_TINY_MIN == 4 && LG_QUANTUM == 4 && LG_PAGE == 14)
@@ -1683,6 +1694,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 15) + (((size_t)3) << 13))
 #define	LG_LARGE_MINCLASS	16
+#define	HUGE_MAXCLASS		((((size_t)1) << 31) + (((size_t)3) << 29))
 #endif
 
 #if (LG_SIZEOF_PTR == 2 && LG_TINY_MIN == 4 && LG_QUANTUM == 4 && LG_PAGE == 16)
@@ -1831,6 +1843,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 17) + (((size_t)3) << 15))
 #define	LG_LARGE_MINCLASS	18
+#define	HUGE_MAXCLASS		((((size_t)1) << 31) + (((size_t)3) << 29))
 #endif
 
 #if (LG_SIZEOF_PTR == 3 && LG_TINY_MIN == 3 && LG_QUANTUM == 3 && LG_PAGE == 12)
@@ -2144,6 +2157,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 13) + (((size_t)3) << 11))
 #define	LG_LARGE_MINCLASS	14
+#define	HUGE_MAXCLASS		((((size_t)1) << 63) + (((size_t)3) << 61))
 #endif
 
 #if (LG_SIZEOF_PTR == 3 && LG_TINY_MIN == 3 && LG_QUANTUM == 3 && LG_PAGE == 13)
@@ -2457,6 +2471,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 14) + (((size_t)3) << 12))
 #define	LG_LARGE_MINCLASS	15
+#define	HUGE_MAXCLASS		((((size_t)1) << 63) + (((size_t)3) << 61))
 #endif
 
 #if (LG_SIZEOF_PTR == 3 && LG_TINY_MIN == 3 && LG_QUANTUM == 3 && LG_PAGE == 14)
@@ -2770,6 +2785,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 15) + (((size_t)3) << 13))
 #define	LG_LARGE_MINCLASS	16
+#define	HUGE_MAXCLASS		((((size_t)1) << 63) + (((size_t)3) << 61))
 #endif
 
 #if (LG_SIZEOF_PTR == 3 && LG_TINY_MIN == 3 && LG_QUANTUM == 3 && LG_PAGE == 16)
@@ -3083,6 +3099,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 17) + (((size_t)3) << 15))
 #define	LG_LARGE_MINCLASS	18
+#define	HUGE_MAXCLASS		((((size_t)1) << 63) + (((size_t)3) << 61))
 #endif
 
 #if (LG_SIZEOF_PTR == 3 && LG_TINY_MIN == 3 && LG_QUANTUM == 4 && LG_PAGE == 12)
@@ -3393,6 +3410,7 @@
 #define	LOOKUP_MAXCLASS		((((size_t)1) << 11) + (((size_t)4) << 9))
 #define	SMALL_MAXCLASS		((((size_t)1) << 13) + (((size_t)3) << 11))
 #define	LG_LARGE_MINCLASS	14
+#define	HUGE_MAXCLASS		((((size_t)1) << 63) + (((size_t)3) << 61))

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***
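
As a quick check of the HUGE_MAXCLASS values repeated through the
size_classes.h hunks above: the 32-bit constant works out to
2^31 + 3*2^29 = 3,758,096,384 bytes (3.5 GiB), and the 64-bit one to
2^63 + 3*2^61 (14 EiB).  A small verification of the 32-bit value:

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t huge_maxclass = (UINT64_C(1) << 31) + (UINT64_C(3) << 29);

	printf("32-bit HUGE_MAXCLASS: %llu bytes (%.1f GiB)\n",
	    (unsigned long long)huge_maxclass,
	    (double)huge_maxclass / (1024.0 * 1024.0 * 1024.0));
	return (0);
}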


