Date:      Thu, 21 Jan 2021 21:34:28 GMT
From:      Konstantin Belousov <kib@FreeBSD.org>
To:        src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org, dev-commits-src-main@FreeBSD.org
Subject:   git: 1ac7c34486ab - main - malloc_aligned: roundup allocation size up to next power of two
Message-ID:  <202101212134.10LLYSL5030084@gitrepo.freebsd.org>

The branch main has been updated by kib:

URL: https://cgit.FreeBSD.org/src/commit/?id=1ac7c34486ab9177c2472278739568d4607e1acc

commit 1ac7c34486ab9177c2472278739568d4607e1acc
Author:     Konstantin Belousov <kib@FreeBSD.org>
AuthorDate: 2021-01-18 21:17:21 +0000
Commit:     Konstantin Belousov <kib@FreeBSD.org>
CommitDate: 2021-01-21 21:34:10 +0000

    malloc_aligned: roundup allocation size up to next power of two
    
    to make it use the right aligned zone.
    
    Reported by:    melifaro
    Reviewed by:    alc, markj (previous version)
    Discussed with: jrtc27
    Tested by:      pho (previous version)
    MFC after:      1 week
    Sponsored by:   The FreeBSD Foundation
    Differential Revision:  https://reviews.freebsd.org/D28219
---
 sys/kern/kern_malloc.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/sys/kern/kern_malloc.c b/sys/kern/kern_malloc.c
index d3a151ad14e2..eff9e62c9a10 100644
--- a/sys/kern/kern_malloc.c
+++ b/sys/kern/kern_malloc.c
@@ -768,6 +768,7 @@ malloc_domainset_aligned(size_t size, size_t align,
     struct malloc_type *mtp, struct domainset *ds, int flags)
 {
 	void *res;
+	size_t asize;
 
 	KASSERT(align != 0 && powerof2(align),
 	    ("malloc_domainset_aligned: wrong align %#zx size %#zx",
@@ -776,12 +777,20 @@ malloc_domainset_aligned(size_t size, size_t align,
 	    ("malloc_domainset_aligned: align %#zx (size %#zx) too large",
 	    align, size));
 
-	if (size < align)
-		size = align;
-	res = malloc_domainset(size, mtp, ds, flags);
+	/*
+	 * Round the allocation size up to the next power of 2,
+	 * because we can only guarantee alignment for
+	 * power-of-2-sized allocations.  Further increase the
+	 * allocation size to align if the rounded size is less than
+	 * align, since malloc zones provide alignment equal to their
+	 * size.
+	 */
+	asize = size <= align ? align : 1UL << flsl(size - 1);
+
+	res = malloc_domainset(asize, mtp, ds, flags);
 	KASSERT(res == NULL || ((uintptr_t)res & (align - 1)) == 0,
 	    ("malloc_domainset_aligned: result not aligned %p size %#zx "
-	    "align %#zx", res, size, align));
+	    "allocsize %#zx align %#zx", res, size, asize, align));
 	return (res);
 }
 



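For reference, the rounding rule the patch introduces can be tried out in isolation. The sketch below is not part of the commit; it is a small userland illustration of the asize computation, assuming flsl() from <strings.h> as on FreeBSD, and the round_up_for_align() helper name is invented for this example.

#include <assert.h>
#include <stdio.h>
#include <strings.h>

/* Round size up to the next power of two, but never below align. */
static size_t
round_up_for_align(size_t size, size_t align)
{
	return (size <= align ? align : (size_t)1 << flsl(size - 1));
}

int
main(void)
{
	/* { size, align, expected rounded size } */
	static const size_t cases[][3] = {
		{ 1,   16,  16 },	/* tiny request: bumped up to align */
		{ 24,  16,  32 },	/* 24 -> next power of two, 32 */
		{ 32,  32,  32 },	/* already a power of two: unchanged */
		{ 33,  32,  64 },	/* 33 -> 64 */
		{ 100, 64, 128 },	/* 100 -> 128 */
	};

	for (size_t i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
		size_t asize = round_up_for_align(cases[i][0], cases[i][1]);

		printf("size %zu align %zu -> asize %zu\n",
		    cases[i][0], cases[i][1], asize);
		assert(asize == cases[i][2]);
		/* Result is a power of two and at least align. */
		assert((asize & (asize - 1)) == 0 && asize >= cases[i][1]);
	}
	return (0);
}

As the comment in the diff explains, malloc zones provide alignment equal to their size, so alignment can only be guaranteed for power-of-two-sized allocations; rounding the requested size up to the next power of two, and never below align, is what keeps the KASSERT at the end of the function true.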