Date: Mon, 1 Nov 2021 14:32:55 GMT
From: Mark Johnston <markj@FreeBSD.org>
To: src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org, dev-commits-src-branches@FreeBSD.org
Subject: git: 48d2c7cc305c - stable/13 - Add the KASAN runtime
Message-ID: <202111011432.1A1EWtO3021243@gitrepo.freebsd.org>

The branch stable/13 has been updated by markj:

URL: https://cgit.FreeBSD.org/src/commit/?id=48d2c7cc305cc94542f95d3c2f378f63b92c8562

commit 48d2c7cc305cc94542f95d3c2f378f63b92c8562
Author:     Mark Johnston <markj@FreeBSD.org>
AuthorDate: 2021-04-13 21:39:19 +0000
Commit:     Mark Johnston <markj@FreeBSD.org>
CommitDate: 2021-11-01 13:56:31 +0000

    Add the KASAN runtime

    KASAN enables the use of LLVM's AddressSanitizer in the kernel.  This
    feature makes use of compiler instrumentation to validate memory
    accesses in the kernel and detect several types of bugs, including
    use-after-frees and out-of-bounds accesses.  It is particularly
    effective when combined with test suites or syzkaller.  KASAN has high
    CPU and memory usage overhead and so is not suited for production
    environments.

    The runtime and pmap maintain a shadow of the kernel map to store
    information about the validity of memory mapped at a given kernel
    address.

    The runtime implements a number of functions defined by the compiler
    ABI.  These are prefixed by __asan.  The compiler emits calls to
    __asan_load*() and __asan_store*() around memory accesses, and the
    runtime consults the shadow map to determine whether a given access is
    valid.

    kasan_mark() is called by various kernel allocators to update state in
    the shadow map.  Updates to those allocators will come in subsequent
    commits.

    The runtime also defines various interceptors.  Some low-level
    routines are implemented in assembly and are thus not amenable to
    compiler instrumentation.  To handle this, the runtime implements
    these routines on behalf of the rest of the kernel.  The sanitizer
    implementation validates memory accesses manually before handing off
    to the real implementation.

    The sanitizer in a KASAN-configured kernel can be disabled by setting
    the loader tunable debug.kasan.disabled=1.

    Obtained from:  NetBSD
    Sponsored by:   The FreeBSD Foundation

    (cherry picked from commit 38da497a4dfcf1979c8c2b0e9f3fa0564035c147)
---
 share/man/man9/Makefile |    3 +
 share/man/man9/kasan.9  |  171 ++++
 sys/conf/files          |    2 +
 sys/kern/subr_asan.c    | 1091 +++++++++++++++++++++++++++++++++++++++++++++++
 sys/sys/asan.h          |   68 +++
 5 files changed, 1335 insertions(+)

diff --git a/share/man/man9/Makefile b/share/man/man9/Makefile
index 8793eb6b7278..500cd252a3df 100644
--- a/share/man/man9/Makefile
+++ b/share/man/man9/Makefile
@@ -186,6 +186,7 @@ MAN=	accept_filter.9 \
 	insmntque.9 \
 	intro.9 \
 	ithread.9 \
+	kasan.9 \
 	KASSERT.9 \
 	kern_reboot.9 \
 	kern_testfrwk.9 \
@@ -1314,6 +1315,8 @@ MLINKS+=kernel_mount.9 free_mntarg.9 \
 	kernel_mount.9 mount_argb.9 \
 	kernel_mount.9 mount_argf.9 \
 	kernel_mount.9 mount_argsu.9
+MLINKS+=kasan.9 KASAN.9 \
+	kasan.9 kasan_mark.9
 MLINKS+=khelp.9 khelp_add_hhook.9 \
 	khelp.9 KHELP_DECLARE_MOD.9 \
 	khelp.9 KHELP_DECLARE_MOD_UMA.9 \
diff --git a/share/man/man9/kasan.9 b/share/man/man9/kasan.9
new file mode 100644
index 000000000000..ecc068209913
--- /dev/null
+++ b/share/man/man9/kasan.9
@@ -0,0 +1,171 @@
+.\"-
+.\" Copyright (c) 2021 The FreeBSD Foundation
+.\"
+.\" This documentation was written by Mark Johnston under sponsorship from
+.\" the FreeBSD Foundation.
+.\"
+.\" Redistribution and use in source and binary forms, with or without
+.\" modification, are permitted provided that the following conditions
+.\" are met:
+.\" 1. Redistributions of source code must retain the above copyright
+.\"    notice, this list of conditions and the following disclaimer.
+.\" 2. Redistributions in binary form must reproduce the above copyright
+.\"    notice, this list of conditions and the following disclaimer in the
+.\"    documentation and/or other materials provided with the distribution.
+.\"
+.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+.\" SUCH DAMAGE.
+.\"
+.\" $FreeBSD$
+.\"
+.Dd April 13, 2021
+.Dt KASAN 9
+.Os
+.Sh NAME
+.Nm kasan
+.Nd kernel address sanitizer
+.Sh SYNOPSIS
+To compile KASAN into the kernel, place the following line in your kernel
+configuration file:
+.Bd -ragged -offset indent
+.Cd "options KASAN"
+.Ed
+.Pp
+.Ft void
+.Fn kasan_mark "const void *addr" "size_t size" "size_t redzsize" "uint8_t code"
+.Sh DESCRIPTION
+.Nm
+is a subsystem which leverages compiler instrumentation to detect invalid
+memory accesses in the kernel.
+Currently it is implemented only on the amd64 platform.
+.Pp
+When
+.Nm
+is compiled into the kernel, the compiler is configured to emit function
+calls upon every memory access.
+The functions are implemented by
+.Nm
+and permit run-time detection of several types of bugs including
+use-after-frees, double frees and frees of invalid pointers, and out-of-bounds
+accesses.
+These protections apply to memory allocated by
+.Xr uma 9 ,
+.Xr malloc 9
+and related functions, and
+.Fn kmem_malloc
+and related functions,
+as well as global variables and kernel stacks.
+.Nm
+is conservative and will not detect all instances of these types of bugs.
+Memory accesses through the kernel map are sanitized, but accesses via the
+direct map are not.
+When
+.Nm
+is configured, the kernel aims to minimize its use of the direct map.
+.Sh IMPLEMENTATION NOTES
+.Nm
+is implemented using compiler instrumentation and a kernel runtime.
+When a kernel is built with the KASAN option enabled, the compiler inserts
+function calls before most memory accesses in the generated code.
+The runtime implements the corresponding functions, which decide whether a
+given access is valid.
+If not, the runtime prints a warning or panics the kernel, depending on the
+value of the
+.Sy debug.kasan.panic_on_violation
+sysctl/tunable.
+.Pp
+The
+.Nm
+runtime works by maintaining a shadow map for the kernel map.
+There exists a linear mapping between addresses in the kernel map and
+addresses in the shadow map.
+The shadow map is used to store information about the current state of
+allocations from the kernel map.
+For example, when a buffer is returned by
+.Xr malloc 9 ,
+the corresponding region of the shadow map is marked to indicate that the
+buffer is valid.
+When it is freed, the shadow map is updated to mark the buffer as invalid.
+Accesses to the buffer are intercepted by the
+.Nm
+runtime and validated using the contents of the shadow map.
+.Pp
+Upon booting, all kernel memory is marked as valid.
+Kernel allocators must mark cached but free buffers as invalid, and must mark
+them valid before freeing the kernel virtual address range.
+This slightly reduces the effectiveness of
+.Nm
+but simplifies its maintenance and integration into the kernel.
+.Pp
+Updates to the shadow map are performed by calling
+.Fn kasan_mark .
+Parameter
+.Fa addr
+is the address of the buffer whose shadow is to be updated,
+.Fa size
+is the usable size of the buffer, and
+.Fa redzsize
+is the full size of the buffer allocated from lower layers of the system.
+.Fa redzsize
+must be greater than or equal to
+.Fa size .
+In some cases kernel allocators will return a buffer larger than that
+requested by the consumer; the unused space at the end is referred to as a
+red zone and is always marked as invalid.
+.Fa code
+allows the caller to specify an identifier used when marking a buffer as
+invalid.
+The identifier is included in any reports generated by
+.Nm
+and helps identify the source of the invalid access.
+For instance, when an item is freed to a
+.Xr uma 9
+zone, the item is marked with
+.Dv KASAN_UMA_FREED .
+See
+.In sys/asan.h
+for the available identifiers.
+If the entire buffer is to be marked valid, i.e.,
+.Fa size
+and
+.Fa redzsize
+are equal,
+.Fa code
+should be 0.
+.Sh SEE ALSO
+.Xr malloc 9 ,
+.Xr memguard 9 ,
+.Xr redzone 9 ,
+.Xr uma 9
+.Sh HISTORY
+.Nm
+first appeared in
+.Fx 14.0 .
+.Sh BUGS
+Accesses to kernel memory outside of the kernel map are ignored by the
+.Nm
+runtime.
+When
+.Nm
+is configured, the kernel memory allocators are configured to use the kernel
+map, but some uses of the direct map remain.
+For example, on amd64, accesses to page table pages are not tracked.
+.Pp
+Some kernel memory allocators explicitly permit accesses after an object has
+been freed.
+These cannot be sanitized by
+.Nm .
+For example, memory from all
+.Xr uma 9
+zones initialized with the
+.Dv UMA_ZONE_NOFREE
+flag is not sanitized.
diff --git a/sys/conf/files b/sys/conf/files
index a5f23ca09774..282a86d74347 100644
--- a/sys/conf/files
+++ b/sys/conf/files
@@ -3900,6 +3900,8 @@ kern/stack_protector.c		standard \
 	compile-with	"${NORMAL_C:N-fstack-protector*}"
 kern/subr_acl_nfs4.c		optional ufs_acl | zfs
 kern/subr_acl_posix1e.c		optional ufs_acl
+kern/subr_asan.c		optional kasan \
+	compile-with	"${NORMAL_C:N-fsanitize*}"
 kern/subr_autoconf.c		standard
 kern/subr_blist.c		standard
 kern/subr_boot.c		standard
diff --git a/sys/kern/subr_asan.c b/sys/kern/subr_asan.c
new file mode 100644
index 000000000000..842370ad1e63
--- /dev/null
+++ b/sys/kern/subr_asan.c
@@ -0,0 +1,1091 @@
+/*	$NetBSD: subr_asan.c,v 1.26 2020/09/10 14:10:46 maxv Exp $	*/
+
+/*
+ * Copyright (c) 2018-2020 Maxime Villard, m00nbsd.net
+ * All rights reserved.
+ *
+ * This code is part of the KASAN subsystem of the NetBSD kernel.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
+ * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#define SAN_RUNTIME
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+#if 0
+__KERNEL_RCSID(0, "$NetBSD: subr_asan.c,v 1.26 2020/09/10 14:10:46 maxv Exp $");
+#endif
+
+#include <sys/param.h>
+#include <sys/systm.h>
+#include <sys/asan.h>
+#include <sys/kernel.h>
+#include <sys/stack.h>
+#include <sys/sysctl.h>
+
+#include <machine/asan.h>
+
+/* ASAN constants. Part of the compiler ABI. */
+#define KASAN_SHADOW_MASK		(KASAN_SHADOW_SCALE - 1)
+#define KASAN_ALLOCA_SCALE_SIZE		32
+
+/* ASAN ABI version. */
+#if defined(__clang__) && (__clang_major__ - 0 >= 6)
+#define ASAN_ABI_VERSION	8
+#elif __GNUC_PREREQ__(7, 1) && !defined(__clang__)
+#define ASAN_ABI_VERSION	8
+#elif __GNUC_PREREQ__(6, 1) && !defined(__clang__)
+#define ASAN_ABI_VERSION	6
+#else
+#error "Unsupported compiler version"
+#endif
+
+#define __RET_ADDR	(unsigned long)__builtin_return_address(0)
+
+/* Global variable descriptor. Part of the compiler ABI. */
+struct __asan_global_source_location {
+	const char *filename;
+	int line_no;
+	int column_no;
+};
+
+struct __asan_global {
+	const void *beg;		/* address of the global variable */
+	size_t size;			/* size of the global variable */
+	size_t size_with_redzone;	/* size with the redzone */
+	const void *name;		/* name of the variable */
+	const void *module_name;	/* name of the module where the var is declared */
+	unsigned long has_dynamic_init;	/* the var has dyn initializer (c++) */
+	struct __asan_global_source_location *location;
+#if ASAN_ABI_VERSION >= 7
+	uintptr_t odr_indicator;	/* the address of the ODR indicator symbol */
+#endif
+};
+
+FEATURE(kasan, "Kernel address sanitizer");
+
+static SYSCTL_NODE(_debug, OID_AUTO, kasan, CTLFLAG_RD | CTLFLAG_MPSAFE, 0,
+    "KASAN options");
+
+static int panic_on_violation = 1;
+SYSCTL_INT(_debug_kasan, OID_AUTO, panic_on_violation, CTLFLAG_RDTUN,
+    &panic_on_violation, 0,
+    "Panic if an invalid access is detected");
+
+static bool kasan_enabled __read_mostly = false;
+
+/* -------------------------------------------------------------------------- */
+
+void
+kasan_shadow_map(void *addr, size_t size)
+{
+	size_t sz, npages, i;
+	vm_offset_t sva, eva;
+
+	KASSERT((vm_offset_t)addr % KASAN_SHADOW_SCALE == 0,
+	    ("%s: invalid address %p", __func__, addr));
+
+	sz = roundup(size, KASAN_SHADOW_SCALE) / KASAN_SHADOW_SCALE;
+
+	sva = kasan_md_addr_to_shad((vm_offset_t)addr);
+	eva = kasan_md_addr_to_shad((vm_offset_t)addr) + sz;
+
+	sva = rounddown(sva, PAGE_SIZE);
+	eva = roundup(eva, PAGE_SIZE);
+
+	npages = (eva - sva) / PAGE_SIZE;
+
+	KASSERT(sva >= KASAN_MIN_ADDRESS && eva < KASAN_MAX_ADDRESS,
+	    ("%s: invalid address range %#lx-%#lx", __func__, sva, eva));
+
+	for (i = 0; i < npages; i++)
+		pmap_kasan_enter(sva + ptoa(i));
+}
+
+void
+kasan_init(void)
+{
+	int disabled;
+
+	disabled = 0;
+	TUNABLE_INT_FETCH("debug.kasan.disabled", &disabled);
+	if (disabled)
+		return;
+
+	/* MD initialization. */
+	kasan_md_init();
+
+	/* Now officially enabled. */
+	kasan_enabled = true;
+}
+
+static inline const char *
+kasan_code_name(uint8_t code)
+{
+	switch (code) {
+	case KASAN_GENERIC_REDZONE:
+		return "GenericRedZone";
+	case KASAN_MALLOC_REDZONE:
+		return "MallocRedZone";
+	case KASAN_KMEM_REDZONE:
+		return "KmemRedZone";
+	case KASAN_UMA_FREED:
+		return "UMAUseAfterFree";
+	case KASAN_KSTACK_FREED:
+		return "KernelStack";
+	case 1 ... 7:
+		return "RedZonePartial";
+	case KASAN_STACK_LEFT:
+		return "StackLeft";
+	case KASAN_STACK_MID:
+		return "StackMiddle";
+	case KASAN_STACK_RIGHT:
+		return "StackRight";
+	case KASAN_USE_AFTER_RET:
+		return "UseAfterRet";
+	case KASAN_USE_AFTER_SCOPE:
+		return "UseAfterScope";
+	default:
+		return "Unknown";
+	}
+}
+
+#define REPORT(f, ...) do {				\
+	if (panic_on_violation) {			\
+		panic(f, __VA_ARGS__);			\
+	} else {					\
+		struct stack st;			\
+							\
+		stack_save(&st);			\
+		printf(f "\n", __VA_ARGS__);		\
+		stack_print_ddb(&st);			\
+	}						\
+} while (0)
+
+static void
+kasan_report(unsigned long addr, size_t size, bool write, unsigned long pc,
+    uint8_t code)
+{
+	REPORT("ASan: Invalid access, %zu-byte %s at %#lx, %s(%x)",
+	    size, (write ? "write" : "read"), addr, kasan_code_name(code),
+	    code);
+}
+
+static __always_inline void
+kasan_shadow_1byte_markvalid(unsigned long addr)
+{
+	int8_t *byte = (int8_t *)kasan_md_addr_to_shad(addr);
+	int8_t last = (addr & KASAN_SHADOW_MASK) + 1;
+
+	*byte = last;
+}
+
+static __always_inline void
+kasan_shadow_Nbyte_markvalid(const void *addr, size_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++) {
+		kasan_shadow_1byte_markvalid((unsigned long)addr + i);
+	}
+}
+
+static __always_inline void
+kasan_shadow_Nbyte_fill(const void *addr, size_t size, uint8_t code)
+{
+	void *shad;
+
+	if (__predict_false(size == 0))
+		return;
+	if (__predict_false(kasan_md_unsupported((vm_offset_t)addr)))
+		return;
+
+	KASSERT((vm_offset_t)addr % KASAN_SHADOW_SCALE == 0,
+	    ("%s: invalid address %p", __func__, addr));
+	KASSERT(size % KASAN_SHADOW_SCALE == 0,
+	    ("%s: invalid size %zu", __func__, size));
+
+	shad = (void *)kasan_md_addr_to_shad((uintptr_t)addr);
+	size = size >> KASAN_SHADOW_SCALE_SHIFT;
+
+	__builtin_memset(shad, code, size);
+}
+
+/*
+ * In an area of size 'sz_with_redz', mark the 'size' first bytes as valid,
+ * and the rest as invalid. There are generally two use cases:
+ *
+ *  o kasan_mark(addr, origsize, size, code), with origsize < size. This marks
+ *    the redzone at the end of the buffer as invalid. If the entire buffer
+ *    is to be marked invalid, origsize will be 0.
+ *
+ *  o kasan_mark(addr, size, size, 0). This marks the entire buffer as valid.
+ */
+void
+kasan_mark(const void *addr, size_t size, size_t redzsize, uint8_t code)
+{
+	size_t i, n, redz;
+	int8_t *shad;
+
+	if ((vm_offset_t)addr >= DMAP_MIN_ADDRESS &&
+	    (vm_offset_t)addr < DMAP_MAX_ADDRESS)
+		return;
+
+	KASSERT((vm_offset_t)addr >= VM_MIN_KERNEL_ADDRESS &&
+	    (vm_offset_t)addr < VM_MAX_KERNEL_ADDRESS,
+	    ("%s: invalid address %p", __func__, addr));
+	KASSERT((vm_offset_t)addr % KASAN_SHADOW_SCALE == 0,
+	    ("%s: invalid address %p", __func__, addr));
+	redz = redzsize - roundup(size, KASAN_SHADOW_SCALE);
+	KASSERT(redz % KASAN_SHADOW_SCALE == 0,
+	    ("%s: invalid size %zu", __func__, redz));
+	shad = (int8_t *)kasan_md_addr_to_shad((uintptr_t)addr);
+
+	/* Chunks of 8 bytes, valid. */
+	n = size / KASAN_SHADOW_SCALE;
+	for (i = 0; i < n; i++) {
+		*shad++ = 0;
+	}
+
+	/* Possibly one chunk, mid. */
+	if ((size & KASAN_SHADOW_MASK) != 0) {
+		*shad++ = (size & KASAN_SHADOW_MASK);
+	}
+
+	/* Chunks of 8 bytes, invalid. */
+	n = redz / KASAN_SHADOW_SCALE;
+	for (i = 0; i < n; i++) {
+		*shad++ = code;
+	}
+}
+
+/* -------------------------------------------------------------------------- */
+
+#define ADDR_CROSSES_SCALE_BOUNDARY(addr, size)			\
+	(addr >> KASAN_SHADOW_SCALE_SHIFT) !=			\
+	((addr + size - 1) >> KASAN_SHADOW_SCALE_SHIFT)
+
+static __always_inline bool
+kasan_shadow_1byte_isvalid(unsigned long addr, uint8_t *code)
+{
+	int8_t *byte = (int8_t *)kasan_md_addr_to_shad(addr);
+	int8_t last = (addr & KASAN_SHADOW_MASK) + 1;
+
+	if (__predict_true(*byte == 0 || last <= *byte)) {
+		return (true);
+	}
+	*code = *byte;
+	return (false);
+}
+
+static __always_inline bool
+kasan_shadow_2byte_isvalid(unsigned long addr, uint8_t *code)
+{
+	int8_t *byte, last;
+
+	if (ADDR_CROSSES_SCALE_BOUNDARY(addr, 2)) {
+		return (kasan_shadow_1byte_isvalid(addr, code) &&
+		    kasan_shadow_1byte_isvalid(addr+1, code));
+	}
+
+	byte = (int8_t *)kasan_md_addr_to_shad(addr);
+	last = ((addr + 1) & KASAN_SHADOW_MASK) + 1;
+
+	if (__predict_true(*byte == 0 || last <= *byte)) {
+		return (true);
+	}
+	*code = *byte;
+	return (false);
+}
+
+static __always_inline bool
+kasan_shadow_4byte_isvalid(unsigned long addr, uint8_t *code)
+{
+	int8_t *byte, last;
+
+	if (ADDR_CROSSES_SCALE_BOUNDARY(addr, 4)) {
+		return (kasan_shadow_2byte_isvalid(addr, code) &&
+		    kasan_shadow_2byte_isvalid(addr+2, code));
+	}
+
+	byte = (int8_t *)kasan_md_addr_to_shad(addr);
+	last = ((addr + 3) & KASAN_SHADOW_MASK) + 1;
+
+	if (__predict_true(*byte == 0 || last <= *byte)) {
+		return (true);
+	}
+	*code = *byte;
+	return (false);
+}
+
+static __always_inline bool
+kasan_shadow_8byte_isvalid(unsigned long addr, uint8_t *code)
+{
+	int8_t *byte, last;
+
+	if (ADDR_CROSSES_SCALE_BOUNDARY(addr, 8)) {
+		return (kasan_shadow_4byte_isvalid(addr, code) &&
+		    kasan_shadow_4byte_isvalid(addr+4, code));
+	}
+
+	byte = (int8_t *)kasan_md_addr_to_shad(addr);
+	last = ((addr + 7) & KASAN_SHADOW_MASK) + 1;
+
+	if (__predict_true(*byte == 0 || last <= *byte)) {
+		return (true);
+	}
+	*code = *byte;
+	return (false);
+}
+
+static __always_inline bool
+kasan_shadow_Nbyte_isvalid(unsigned long addr, size_t size, uint8_t *code)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++) {
+		if (!kasan_shadow_1byte_isvalid(addr+i, code))
+			return (false);
+	}
+
+	return (true);
+}
+
+static __always_inline void
+kasan_shadow_check(unsigned long addr, size_t size, bool write,
+    unsigned long retaddr)
+{
+	uint8_t code;
+	bool valid;
+
+	if (__predict_false(!kasan_enabled))
+		return;
+	if (__predict_false(size == 0))
+		return;
+	if (__predict_false(kasan_md_unsupported(addr)))
+		return;
+	if (__predict_false(panicstr != NULL))
+		return;
+
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			valid = kasan_shadow_1byte_isvalid(addr, &code);
+			break;
+		case 2:
+			valid = kasan_shadow_2byte_isvalid(addr, &code);
+			break;
+		case 4:
+			valid = kasan_shadow_4byte_isvalid(addr, &code);
+			break;
+		case 8:
+			valid = kasan_shadow_8byte_isvalid(addr, &code);
+			break;
+		default:
+			valid = kasan_shadow_Nbyte_isvalid(addr, size, &code);
+			break;
+		}
+	} else {
+		valid = kasan_shadow_Nbyte_isvalid(addr, size, &code);
+	}
+
+	if (__predict_false(!valid)) {
+		kasan_report(addr, size, write, retaddr, code);
+	}
+}
+
+/* -------------------------------------------------------------------------- */
+
+void *
+kasan_memcpy(void *dst, const void *src, size_t len)
+{
+	kasan_shadow_check((unsigned long)src, len, false, __RET_ADDR);
+	kasan_shadow_check((unsigned long)dst, len, true, __RET_ADDR);
+	return (__builtin_memcpy(dst, src, len));
+}
+
+int
+kasan_memcmp(const void *b1, const void *b2, size_t len)
+{
+	kasan_shadow_check((unsigned long)b1, len, false, __RET_ADDR);
+	kasan_shadow_check((unsigned long)b2, len, false, __RET_ADDR);
+	return (__builtin_memcmp(b1, b2, len));
+}
+
+void *
+kasan_memset(void *b, int c, size_t len)
+{
+	kasan_shadow_check((unsigned long)b, len, true, __RET_ADDR);
+	return (__builtin_memset(b, c, len));
+}
+
+void *
+kasan_memmove(void *dst, const void *src, size_t len)
+{
+	kasan_shadow_check((unsigned long)src, len, false, __RET_ADDR);
+	kasan_shadow_check((unsigned long)dst, len, true, __RET_ADDR);
+	return (__builtin_memmove(dst, src, len));
+}
+
+size_t
+kasan_strlen(const char *str)
+{
+	const char *s;
+
+	s = str;
+	while (1) {
+		kasan_shadow_check((unsigned long)s, 1, false, __RET_ADDR);
+		if (*s == '\0')
+			break;
+		s++;
+	}
+
+	return (s - str);
+}
+
+char *
+kasan_strcpy(char *dst, const char *src)
+{
+	char *save = dst;
+
+	while (1) {
+		kasan_shadow_check((unsigned long)src, 1, false, __RET_ADDR);
+		kasan_shadow_check((unsigned long)dst, 1, true, __RET_ADDR);
+		*dst = *src;
+		if (*src == '\0')
+			break;
+		src++, dst++;
+	}
+
+	return save;
+}
+
+int
+kasan_strcmp(const char *s1, const char *s2)
+{
+	while (1) {
+		kasan_shadow_check((unsigned long)s1, 1, false, __RET_ADDR);
+		kasan_shadow_check((unsigned long)s2, 1, false, __RET_ADDR);
+		if (*s1 != *s2)
+			break;
+		if (*s1 == '\0')
+			return 0;
+		s1++, s2++;
+	}
+
+	return (*(const unsigned char *)s1 - *(const unsigned char *)s2);
+}
+
+int
+kasan_copyin(const void *uaddr, void *kaddr, size_t len)
+{
+	kasan_shadow_check((unsigned long)kaddr, len, true, __RET_ADDR);
+	return (copyin(uaddr, kaddr, len));
+}
+
+int
+kasan_copyinstr(const void *uaddr, void *kaddr, size_t len, size_t *done)
+{
+	kasan_shadow_check((unsigned long)kaddr, len, true, __RET_ADDR);
+	return (copyinstr(uaddr, kaddr, len, done));
+}
+
+int
+kasan_copyout(const void *kaddr, void *uaddr, size_t len)
+{
+	kasan_shadow_check((unsigned long)kaddr, len, false, __RET_ADDR);
+	return (copyout(kaddr, uaddr, len));
+}
+
+/* -------------------------------------------------------------------------- */
+
+#include <machine/atomic.h>
+#define ATOMIC_SAN_PREFIX	kasan
+#include <sys/atomic_san.h>
+
+#define _ASAN_ATOMIC_FUNC_ADD(name, type)				\
+	void kasan_atomic_add_##name(volatile type *ptr, type val)	\
+	{								\
+		kasan_shadow_check((uintptr_t)ptr, sizeof(type), true,	\
+		    __RET_ADDR);					\
+		atomic_add_##name(ptr, val);				\
+	}
+
+#define ASAN_ATOMIC_FUNC_ADD(name, type)				\
+	_ASAN_ATOMIC_FUNC_ADD(name, type)				\
+	_ASAN_ATOMIC_FUNC_ADD(acq_##name, type)				\
+	_ASAN_ATOMIC_FUNC_ADD(rel_##name, type)
+
+#define _ASAN_ATOMIC_FUNC_SUBTRACT(name, type)				\
+	void kasan_atomic_subtract_##name(volatile type *ptr, type val)	\
+	{								\
+		kasan_shadow_check((uintptr_t)ptr, sizeof(type), true,	\
+		    __RET_ADDR);					\
+		atomic_subtract_##name(ptr, val);			\
+	}
+
+#define ASAN_ATOMIC_FUNC_SUBTRACT(name, type)				\
+	_ASAN_ATOMIC_FUNC_SUBTRACT(name, type)				\
+	_ASAN_ATOMIC_FUNC_SUBTRACT(acq_##name, type)			\
+	_ASAN_ATOMIC_FUNC_SUBTRACT(rel_##name, type)
+
+#define _ASAN_ATOMIC_FUNC_SET(name, type)				\
+	void kasan_atomic_set_##name(volatile type *ptr, type val)	\
+	{								\
+		kasan_shadow_check((uintptr_t)ptr, sizeof(type), true,	\
+		    __RET_ADDR);					\
+		atomic_set_##name(ptr, val);				\
+	}
+
+#define ASAN_ATOMIC_FUNC_SET(name, type)				\
+	_ASAN_ATOMIC_FUNC_SET(name, type)				\
+	_ASAN_ATOMIC_FUNC_SET(acq_##name, type)				\
+	_ASAN_ATOMIC_FUNC_SET(rel_##name, type)
+
+#define _ASAN_ATOMIC_FUNC_CLEAR(name, type)				\
+	void kasan_atomic_clear_##name(volatile type *ptr, type val)	\
+	{								\
+		kasan_shadow_check((uintptr_t)ptr, sizeof(type), true,	\
+		    __RET_ADDR);					\
+		atomic_clear_##name(ptr, val);				\
+	}
+
+#define ASAN_ATOMIC_FUNC_CLEAR(name, type)				\
+	_ASAN_ATOMIC_FUNC_CLEAR(name, type)				\
+	_ASAN_ATOMIC_FUNC_CLEAR(acq_##name, type)			\
+	_ASAN_ATOMIC_FUNC_CLEAR(rel_##name, type)
+
+#define ASAN_ATOMIC_FUNC_FETCHADD(name, type)				\
+	type kasan_atomic_fetchadd_##name(volatile type *ptr, type val)	\
+	{								\
+		kasan_shadow_check((uintptr_t)ptr, sizeof(type), true,	\
+		    __RET_ADDR);					\
+		return (atomic_fetchadd_##name(ptr, val));		\
+	}
+
+#define ASAN_ATOMIC_FUNC_READANDCLEAR(name, type)			\
+	type kasan_atomic_readandclear_##name(volatile type *ptr)	\
+	{								\
+		kasan_shadow_check((uintptr_t)ptr, sizeof(type), true,	\
+		    __RET_ADDR);					\
+		return (atomic_readandclear_##name(ptr));		\
+	}
+
+#define ASAN_ATOMIC_FUNC_TESTANDCLEAR(name, type)			\
+	int kasan_atomic_testandclear_##name(volatile type *ptr, u_int v)	\
+	{								\
+		kasan_shadow_check((uintptr_t)ptr, sizeof(type), true,	\
+		    __RET_ADDR);					\
+		return (atomic_testandclear_##name(ptr, v));		\
+	}
+
+#define ASAN_ATOMIC_FUNC_TESTANDSET(name, type)				\
+	int kasan_atomic_testandset_##name(volatile type *ptr, u_int v)	\
+	{								\
+		kasan_shadow_check((uintptr_t)ptr, sizeof(type), true,	\
+		    __RET_ADDR);					\
+		return (atomic_testandset_##name(ptr, v));		\
+	}
+
+#define ASAN_ATOMIC_FUNC_SWAP(name, type)				\
+	type kasan_atomic_swap_##name(volatile type *ptr, type val)	\
+	{								\
+		kasan_shadow_check((uintptr_t)ptr, sizeof(type), true,	\
+		    __RET_ADDR);					\
+		return (atomic_swap_##name(ptr, val));			\
+	}
+
+#define _ASAN_ATOMIC_FUNC_CMPSET(name, type)				\
+	int kasan_atomic_cmpset_##name(volatile type *ptr, type oval,	\
+	    type nval)							\
+	{								\
+		kasan_shadow_check((uintptr_t)ptr, sizeof(type), true,	\
+		    __RET_ADDR);					\
+		return (atomic_cmpset_##name(ptr, oval, nval));		\
+	}
+
+#define ASAN_ATOMIC_FUNC_CMPSET(name, type)				\
+	_ASAN_ATOMIC_FUNC_CMPSET(name, type)				\
+	_ASAN_ATOMIC_FUNC_CMPSET(acq_##name, type)			\
+	_ASAN_ATOMIC_FUNC_CMPSET(rel_##name, type)
+
+#define _ASAN_ATOMIC_FUNC_FCMPSET(name, type)				\
+	int kasan_atomic_fcmpset_##name(volatile type *ptr, type *oval,	\
+	    type nval)							\
+	{								\
+		kasan_shadow_check((uintptr_t)ptr, sizeof(type), true,	\
+		    __RET_ADDR);					\
+		return (atomic_fcmpset_##name(ptr, oval, nval));	\
+	}
+
+#define ASAN_ATOMIC_FUNC_FCMPSET(name, type)				\
+	_ASAN_ATOMIC_FUNC_FCMPSET(name, type)				\
+	_ASAN_ATOMIC_FUNC_FCMPSET(acq_##name, type)			\
+	_ASAN_ATOMIC_FUNC_FCMPSET(rel_##name, type)
+
+#define ASAN_ATOMIC_FUNC_THREAD_FENCE(name)				\
+	void kasan_atomic_thread_fence_##name(void)			\
+	{								\
+		atomic_thread_fence_##name();				\
+	}
+
+#define _ASAN_ATOMIC_FUNC_LOAD(name, type)				\
+	type kasan_atomic_load_##name(volatile type *ptr)		\
+	{								\
+		kasan_shadow_check((uintptr_t)ptr, sizeof(type), true,	\
+		    __RET_ADDR);					\
+		return (atomic_load_##name(ptr));			\
+	}
+
+#define ASAN_ATOMIC_FUNC_LOAD(name, type)				\
+	_ASAN_ATOMIC_FUNC_LOAD(name, type)				\
+	_ASAN_ATOMIC_FUNC_LOAD(acq_##name, type)
+
+#define _ASAN_ATOMIC_FUNC_STORE(name, type)				\
+	void kasan_atomic_store_##name(volatile type *ptr, type val)	\
+	{								\
+		kasan_shadow_check((uintptr_t)ptr, sizeof(type), true,	\
+		    __RET_ADDR);					\
+		atomic_store_##name(ptr, val);				\
+	}
+
+#define ASAN_ATOMIC_FUNC_STORE(name, type)				\
+	_ASAN_ATOMIC_FUNC_STORE(name, type)				\
+	_ASAN_ATOMIC_FUNC_STORE(rel_##name, type)
+
+ASAN_ATOMIC_FUNC_ADD(8, uint8_t);
+ASAN_ATOMIC_FUNC_ADD(16, uint16_t);
+ASAN_ATOMIC_FUNC_ADD(32, uint32_t);
+ASAN_ATOMIC_FUNC_ADD(64, uint64_t);
+ASAN_ATOMIC_FUNC_ADD(int, u_int);
+ASAN_ATOMIC_FUNC_ADD(long, u_long);
+ASAN_ATOMIC_FUNC_ADD(ptr, uintptr_t);
+
+ASAN_ATOMIC_FUNC_SUBTRACT(8, uint8_t);
+ASAN_ATOMIC_FUNC_SUBTRACT(16, uint16_t);
+ASAN_ATOMIC_FUNC_SUBTRACT(32, uint32_t);
+ASAN_ATOMIC_FUNC_SUBTRACT(64, uint64_t);
+ASAN_ATOMIC_FUNC_SUBTRACT(int, u_int);
+ASAN_ATOMIC_FUNC_SUBTRACT(long, u_long);
+ASAN_ATOMIC_FUNC_SUBTRACT(ptr, uintptr_t);
+
+ASAN_ATOMIC_FUNC_SET(8, uint8_t);
+ASAN_ATOMIC_FUNC_SET(16, uint16_t);
+ASAN_ATOMIC_FUNC_SET(32, uint32_t);
+ASAN_ATOMIC_FUNC_SET(64, uint64_t);
+ASAN_ATOMIC_FUNC_SET(int, u_int);
+ASAN_ATOMIC_FUNC_SET(long, u_long);
+ASAN_ATOMIC_FUNC_SET(ptr, uintptr_t);
+
+ASAN_ATOMIC_FUNC_CLEAR(8, uint8_t);
+ASAN_ATOMIC_FUNC_CLEAR(16, uint16_t);
+ASAN_ATOMIC_FUNC_CLEAR(32, uint32_t);
+ASAN_ATOMIC_FUNC_CLEAR(64, uint64_t);
+ASAN_ATOMIC_FUNC_CLEAR(int, u_int);
+ASAN_ATOMIC_FUNC_CLEAR(long, u_long);
+ASAN_ATOMIC_FUNC_CLEAR(ptr, uintptr_t);
+
+ASAN_ATOMIC_FUNC_FETCHADD(32, uint32_t);
+ASAN_ATOMIC_FUNC_FETCHADD(64, uint64_t);
+ASAN_ATOMIC_FUNC_FETCHADD(int, u_int);
+ASAN_ATOMIC_FUNC_FETCHADD(long, u_long);
+
+ASAN_ATOMIC_FUNC_READANDCLEAR(32, uint32_t);
+ASAN_ATOMIC_FUNC_READANDCLEAR(64, uint64_t);

*** 451 LINES SKIPPED ***
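
To make the compiler ABI described above concrete, the following is a
conceptual sketch of what the instrumentation amounts to.  It is an
approximation only: the compiler performs this rewriting internally (and may
inline the shadow lookup rather than emit a call), and struct foo and
foo_bump() are invented for the example.  The __asan_load*() and
__asan_store*() definitions themselves are among the elided lines of
subr_asan.c; like the interceptors above, they feed the access into
kasan_shadow_check().

#include <sys/types.h>

/* Runtime entry points called by compiler-generated code (ASAN ABI). */
void __asan_load8(unsigned long);
void __asan_store8(unsigned long);

struct foo {
	uint64_t f_count;
};

/* As written in the source: fp->f_count++; */
void
foo_bump(struct foo *fp)
{
	/*
	 * Roughly what a KASAN kernel executes instead: each memory access
	 * is preceded by a runtime call that consults the shadow map and
	 * reports the access if it is invalid.
	 */
	__asan_load8((unsigned long)&fp->f_count);
	uint64_t v = fp->f_count;

	__asan_store8((unsigned long)&fp->f_count);
	fp->f_count = v + 1;
}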
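
The shadow encoding that kasan_mark() and the kasan_shadow_*byte_isvalid()
helpers agree on can be exercised in isolation.  Below is a minimal userland
model of it: the shadow[] array and addr_to_shad() stand in for the real
shadow map and kasan_md_addr_to_shad(), which implement the linear
kernel-map-to-shadow-map mapping mentioned in the manual page.  One shadow
byte describes one 8-byte granule: 0 means all eight bytes are valid, 1
through 7 mean only that many leading bytes are valid, and anything else
marks the granule invalid and doubles as the redzone code.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define KASAN_SHADOW_SCALE_SHIFT	3
#define KASAN_SHADOW_SCALE		(1 << KASAN_SHADOW_SCALE_SHIFT)
#define KASAN_SHADOW_MASK		(KASAN_SHADOW_SCALE - 1)

static int8_t shadow[16];	/* model shadow for a 128-byte "kernel map" */

static int8_t *
addr_to_shad(unsigned long addr)
{
	/* Linear mapping: one shadow byte per 8-byte granule. */
	return (&shadow[addr >> KASAN_SHADOW_SCALE_SHIFT]);
}

/* The same test as kasan_shadow_1byte_isvalid() in subr_asan.c. */
static int
byte_isvalid(unsigned long addr)
{
	int8_t *byte = addr_to_shad(addr);
	int8_t last = (addr & KASAN_SHADOW_MASK) + 1;

	return (*byte == 0 || last <= *byte);
}

int
main(void)
{
	/*
	 * Mark a 13-byte buffer at model address 0 valid, as
	 * kasan_mark(addr, 13, 16, code) would: one fully valid granule
	 * and a partial granule with 13 & 7 == 5 leading valid bytes.
	 */
	shadow[0] = 0;
	shadow[1] = 13 & KASAN_SHADOW_MASK;

	assert(byte_isvalid(12));	/* last byte of the buffer */
	assert(!byte_isvalid(13));	/* first byte past the buffer */
	printf("shadow encoding checks out\n");
	return (0);
}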
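
From an allocator's point of view, the kasan_mark() contract documented in
kasan.9 then looks like the following.  This is a hypothetical consumer:
toy_alloc(), toy_free(), backing_alloc() and backing_free() are invented for
the example, while KASAN_GENERIC_REDZONE is one of the real codes from
sys/asan.h and roundup2() comes from sys/param.h.

#include <sys/param.h>
#include <sys/asan.h>

void *backing_alloc(size_t);		/* hypothetical backing allocator */
void backing_free(void *, size_t);

void *
toy_alloc(size_t size)
{
	/* Suppose the backing allocator rounds requests up. */
	size_t redzsize = roundup2(size + 8, 16);
	void *buf = backing_alloc(redzsize);

	/*
	 * First 'size' bytes valid; the trailing red zone is marked
	 * invalid, so an out-of-bounds access into it is reported as
	 * "GenericRedZone".
	 */
	kasan_mark(buf, size, redzsize, KASAN_GENERIC_REDZONE);
	return (buf);
}

void
toy_free(void *buf, size_t size)
{
	size_t redzsize = roundup2(size + 8, 16);

	/*
	 * Passing size == 0 marks the entire range invalid (see the
	 * comment above kasan_mark()), so later accesses through a stale
	 * pointer are reported.
	 */
	kasan_mark(buf, 0, redzsize, KASAN_GENERIC_REDZONE);
	backing_free(buf, redzsize);
}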
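
The interceptors at the bottom of the runtime are wired up at build time.
The header plumbing is not part of this diff, so the following only sketches
the idea and does not reproduce the actual FreeBSD header text: when KASAN is
configured, calls to routines such as memcpy() are redirected to their
kasan_-prefixed wrappers, and the SAN_RUNTIME define at the top of
subr_asan.c exempts the runtime itself from the redirection so that it can
reach the real implementations.

/*
 * Sketch only; the real macros live in the kernel headers.  Ordinary
 * kernel code gets the checked wrapper; files that define SAN_RUNTIME,
 * such as subr_asan.c itself, call the real routine.
 */
#if defined(KASAN) && !defined(SAN_RUNTIME)
#define	memcpy(dst, src, len)	kasan_memcpy((dst), (src), (len))
#define	memset(b, c, len)	kasan_memset((b), (c), (len))
#define	memcmp(b1, b2, len)	kasan_memcmp((b1), (b2), (len))
#endif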
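
Putting the pieces together, here is a sketch of what a violation looks like
once the allocator integration from the subsequent commits is in place.
kasan_demo() and the report text are illustrative; the format is that of the
REPORT() macro above.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>

static void
kasan_demo(void)
{
	char *p;

	p = malloc(8, M_TEMP, M_WAITOK);

	/*
	 * One byte past the end of the allocation.  The compiler emits
	 * __asan_store1() for this store, the shadow byte for p[8] holds
	 * a redzone code, and the runtime reports something like:
	 *
	 *   ASan: Invalid access, 1-byte write at 0x..., MallocRedZone(...)
	 *
	 * With the default debug.kasan.panic_on_violation=1 this panics;
	 * setting it to 0 prints the report and a stack trace instead,
	 * and the loader tunable debug.kasan.disabled=1 turns the
	 * sanitizer off entirely.
	 */
	p[8] = 'x';

	free(p, M_TEMP);
}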