From nobody Wed Jan  7 03:13:46 2026
List-Id: Commit messages for all branches of the src repository
List-Archive: https://lists.freebsd.org/archives/dev-commits-src-all
References: <695c1a7c.859c.12adca29@gitrepo.freebsd.org>
In-Reply-To: <695c1a7c.859c.12adca29@gitrepo.freebsd.org>
From: Adrian Chadd <adrian@freebsd.org>
Date: Tue, 6 Jan 2026 19:13:46 -0800
Subject: Re: git: d448578b445d - main - linuxkpi: Add <linux/siphash.h>
To: Jean-Sébastien Pédron
Cc:
    src-committers@freebsd.org, dev-commits-src-all@freebsd.org,
    dev-commits-src-main@freebsd.org

Hi!

This looks like it's at least failing on armv7:

https://ci.freebsd.org/job/FreeBSD-main-armv7-build/26773/

02:56:27 --- all_subdir_linuxkpi ---
02:56:27 --- linux_siphash.o ---
02:56:27 /usr/src/sys/compat/linuxkpi/common/src/linux_siphash.c:425:3: error: call
02:56:27 to undeclared function 'rol32'; ISO C99 and later do not support implicit f
02:56:27 unction declarations [-Werror,-Wimplicit-function-declaration]
02:56:27   425 |         HSIPROUND;
02:56:27       |         ^


-adrian

On Mon, 5 Jan 2026 at 12:11, Jean-Sébastien Pédron wrote:
>
> The branch main has been updated by dumbbell:
>
> URL: https://cgit.FreeBSD.org/src/commit/?id=d448578b445da95806ef9af996a0db9754daadeb
>
> commit d448578b445da95806ef9af996a0db9754daadeb
> Author:     Jean-Sébastien Pédron <dumbbell@FreeBSD.org>
> AuthorDate: 2025-09-07 13:43:11 +0000
> Commit:     Jean-Sébastien Pédron <dumbbell@FreeBSD.org>
> CommitDate: 2026-01-05 19:32:50 +0000
>
>     linuxkpi: Add <linux/siphash.h>
>
>     The file is copied as is from Linux 6.10 as it is dual-licensed under the
>     GPLv2 and BSD 3-clause.
>
>     The amdgpu DRM driver started to use it in Linux 6.10.
>
>     Reviewed by:    bz, emaste
>     Sponsored by:   The FreeBSD Foundation
>     Differential Revision: https://reviews.freebsd.org/D54501
> ---
>  sys/compat/linuxkpi/common/include/linux/siphash.h | 168 +++++++
>  sys/compat/linuxkpi/common/src/linux_siphash.c     | 546 +++++++++++++++++++++
>  sys/conf/files                                      |   2 +
>  sys/modules/linuxkpi/Makefile                       |   1 +
>  4 files changed, 717 insertions(+)
>
> diff --git a/sys/compat/linuxkpi/common/include/linux/siphash.h b/sys/compat/linuxkpi/common/include/linux/siphash.h
> new file mode 100644
> index 000000000000..9153e77382e1
> --- /dev/null
> +++ b/sys/compat/linuxkpi/common/include/linux/siphash.h
> @@ -0,0 +1,168 @@
> +/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause) */
> +/* Copyright (C) 2016-2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
> + *
> + * SipHash: a fast short-input PRF
> + * https://131002.net/siphash/
> + *
> + * This implementation is specifically for SipHash2-4 for a secure PRF
> + * and HalfSipHash1-3/SipHash1-3 for an insecure PRF only suitable for
> + * hashtables.
> + */
> +
> +#ifndef _LINUX_SIPHASH_H
> +#define _LINUX_SIPHASH_H
> +
> +#include <linux/types.h>
> +#include <linux/kernel.h>
> +
> +#define SIPHASH_ALIGNMENT __alignof__(u64)
> +typedef struct {
> +        u64 key[2];
> +} siphash_key_t;
> +
> +#define siphash_aligned_key_t siphash_key_t __aligned(16)
> +
> +static inline bool siphash_key_is_zero(const siphash_key_t *key)
> +{
> +        return !(key->key[0] | key->key[1]);
> +}
> +
> +u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key);
> +u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key);
> +
> +u64 siphash_1u64(const u64 a, const siphash_key_t *key);
> +u64 siphash_2u64(const u64 a, const u64 b, const siphash_key_t *key);
> +u64 siphash_3u64(const u64 a, const u64 b, const u64 c,
> +                 const siphash_key_t *key);
> +u64 siphash_4u64(const u64 a, const u64 b, const u64 c, const u64 d,
> +                 const siphash_key_t *key);
> +u64 siphash_1u32(const u32 a, const siphash_key_t *key);
> +u64 siphash_3u32(const u32 a, const u32 b, const u32 c,
> +                 const siphash_key_t *key);
> +
> +static inline u64 siphash_2u32(const u32 a, const u32 b,
> +                               const siphash_key_t *key)
> +{
> +        return siphash_1u64((u64)b << 32 | a, key);
> +}
> +static inline u64 siphash_4u32(const u32 a, const u32 b, const u32 c,
> +                               const u32 d, const siphash_key_t *key)
> +{
> +        return siphash_2u64((u64)b << 32 | a, (u64)d << 32 | c, key);
> +}
> +
> +
> +static inline u64 ___siphash_aligned(const __le64 *data, size_t len,
> +                                     const siphash_key_t *key)
> +{
> +        if (__builtin_constant_p(len) && len == 4)
> +                return siphash_1u32(le32_to_cpup((const __le32 *)data), key);
> +        if (__builtin_constant_p(len) && len == 8)
> +                return siphash_1u64(le64_to_cpu(data[0]), key);
> +        if (__builtin_constant_p(len) && len == 16)
> +                return siphash_2u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]),
> +                                    key);
> +        if (__builtin_constant_p(len) && len == 24)
> +                return siphash_3u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]),
> +                                    le64_to_cpu(data[2]), key);
> +        if (__builtin_constant_p(len) && len == 32)
> +                return siphash_4u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]),
> +                                    le64_to_cpu(data[2]), le64_to_cpu(data[3]),
> +                                    key);
> +        return __siphash_aligned(data, len, key);
> +}
> +
> +/**
> + * siphash - compute 64-bit siphash PRF value
> + * @data: buffer to hash
> + * @size: size of @data
> + * @key: the siphash key
> + */
> +static inline u64 siphash(const void *data, size_t len,
> +                          const siphash_key_t *key)
> +{
> +        if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
> +            !IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT))
> +                return __siphash_unaligned(data, len, key);
> +        return ___siphash_aligned(data, len, key);
> +}
> +
> +#define HSIPHASH_ALIGNMENT __alignof__(unsigned long)
> +typedef struct {
> +        unsigned long key[2];
> +} hsiphash_key_t;
> +
> +u32 __hsiphash_aligned(const void *data, size_t len,
> +                       const hsiphash_key_t *key);
> +u32 __hsiphash_unaligned(const void *data, size_t len,
> +                         const hsiphash_key_t *key);
> +
> +u32 hsiphash_1u32(const u32 a, const hsiphash_key_t *key);
> +u32 hsiphash_2u32(const u32 a, const u32 b, const hsiphash_key_t *key);
> +u32 hsiphash_3u32(const u32 a, const u32 b, const u32 c,
> +                  const hsiphash_key_t *key);
> +u32 hsiphash_4u32(const u32 a, const u32 b, const u32 c, const u32 d,
> +                  const hsiphash_key_t *key);
> +
> +static inline u32 ___hsiphash_aligned(const __le32 *data, size_t len,
> +                                      const hsiphash_key_t *key)
> +{
> +        if (__builtin_constant_p(len) && len == 4)
> +                return hsiphash_1u32(le32_to_cpu(data[0]), key);
> +        if (__builtin_constant_p(len) && len == 8)
> +                return hsiphash_2u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]),
> +                                     key);
> +        if (__builtin_constant_p(len) && len == 12)
> +                return hsiphash_3u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]),
> +                                     le32_to_cpu(data[2]), key);
> +        if (__builtin_constant_p(len) && len == 16)
> +                return hsiphash_4u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]),
> +                                     le32_to_cpu(data[2]), le32_to_cpu(data[3]),
> +                                     key);
> +        return __hsiphash_aligned(data, len, key);
> +}
> +
> +/**
> + * hsiphash - compute 32-bit hsiphash PRF value
> + * @data: buffer to hash
> + * @size: size of @data
> + * @key: the hsiphash key
> + */
> +static inline u32 hsiphash(const void *data, size_t len,
> +                           const hsiphash_key_t *key)
> +{
> +        if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
> +            !IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT))
> +                return __hsiphash_unaligned(data, len, key);
> +        return ___hsiphash_aligned(data, len, key);
> +}
> +
> +/*
> + * These macros expose the raw SipHash and HalfSipHash permutations.
> + * Do not use them directly! If you think you have a use for them,
> + * be sure to CC the maintainer of this file explaining why.
> + */
> +
> +#define SIPHASH_PERMUTATION(a, b, c, d) ( \
> +        (a) += (b), (b) = rol64((b), 13), (b) ^= (a), (a) = rol64((a), 32), \
> +        (c) += (d), (d) = rol64((d), 16), (d) ^= (c), \
> +        (a) += (d), (d) = rol64((d), 21), (d) ^= (a), \
> +        (c) += (b), (b) = rol64((b), 17), (b) ^= (c), (c) = rol64((c), 32))
> +
> +#define SIPHASH_CONST_0 0x736f6d6570736575ULL
> +#define SIPHASH_CONST_1 0x646f72616e646f6dULL
> +#define SIPHASH_CONST_2 0x6c7967656e657261ULL
> +#define SIPHASH_CONST_3 0x7465646279746573ULL
> +
> +#define HSIPHASH_PERMUTATION(a, b, c, d) ( \
> +        (a) += (b), (b) = rol32((b), 5), (b) ^= (a), (a) = rol32((a), 16), \
> +        (c) += (d), (d) = rol32((d), 8), (d) ^= (c), \
> +        (a) += (d), (d) = rol32((d), 7), (d) ^= (a), \
> +        (c) += (b), (b) = rol32((b), 13), (b) ^= (c), (c) = rol32((c), 16))
> +
> +#define HSIPHASH_CONST_0 0U
> +#define HSIPHASH_CONST_1 0U
> +#define HSIPHASH_CONST_2 0x6c796765U
> +#define HSIPHASH_CONST_3 0x74656462U
> +
> +#endif /* _LINUX_SIPHASH_H */
> diff --git a/sys/compat/linuxkpi/common/src/linux_siphash.c b/sys/compat/linuxkpi/common/src/linux_siphash.c
> new file mode 100644
> index 000000000000..b4842a8250e1
> --- /dev/null
> +++ b/sys/compat/linuxkpi/common/src/linux_siphash.c
> @@ -0,0 +1,546 @@
> +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
> +/* Copyright (C) 2016-2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
> + *
> + * SipHash: a fast short-input PRF
> + * https://131002.net/siphash/
> + *
> + * This implementation is specifically for SipHash2-4 for a secure PRF
> + * and HalfSipHash1-3/SipHash1-3 for an insecure PRF only suitable for
> + * hashtables.
> + */
> +
> +#include <linux/siphash.h>
> +#include <asm/unaligned.h>
> +
> +#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
> +#include <linux/dcache.h>
> +#include <asm/word-at-a-time.h>
> +#endif
> +
> +#define EXPORT_SYMBOL(name)
> +
> +#define SIPROUND SIPHASH_PERMUTATION(v0, v1, v2, v3)
> +
> +#define PREAMBLE(len) \
> +        u64 v0 = SIPHASH_CONST_0; \
> +        u64 v1 = SIPHASH_CONST_1; \
> +        u64 v2 = SIPHASH_CONST_2; \
> +        u64 v3 = SIPHASH_CONST_3; \
> +        u64 b = ((u64)(len)) << 56; \
> +        v3 ^= key->key[1]; \
> +        v2 ^= key->key[0]; \
> +        v1 ^= key->key[1]; \
> +        v0 ^= key->key[0];
> +
> +#define POSTAMBLE \
> +        v3 ^= b; \
> +        SIPROUND; \
> +        SIPROUND; \
> +        v0 ^= b; \
> +        v2 ^= 0xff; \
> +        SIPROUND; \
> +        SIPROUND; \
> +        SIPROUND; \
> +        SIPROUND; \
> +        return (v0 ^ v1) ^ (v2 ^ v3);
> +
> +#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> +u64 __siphash_aligned(const void *_data, size_t len, const siphash_key_t *key)
> +{
> +        const u8 *data = _data;
> +        const u8 *end = data + len - (len % sizeof(u64));
> +        const u8 left = len & (sizeof(u64) - 1);
> +        u64 m;
> +        PREAMBLE(len)
> +        for (; data != end; data += sizeof(u64)) {
> +                m = le64_to_cpup(data);
> +                v3 ^= m;
> +                SIPROUND;
> +                SIPROUND;
> +                v0 ^= m;
> +        }
> +#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
> +        if (left)
> +                b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
> +                                                  bytemask_from_count(left)));
> +#else
> +        switch (left) {
> +        case 7: b |= ((u64)end[6]) << 48; fallthrough;
> +        case 6: b |= ((u64)end[5]) << 40; fallthrough;
> +        case 5: b |= ((u64)end[4]) << 32; fallthrough;
> +        case 4: b |= le32_to_cpup(data); break;
> +        case 3: b |= ((u64)end[2]) << 16; fallthrough;
> +        case 2: b |= le16_to_cpup(data); break;
> +        case 1: b |= end[0];
> +        }
> +#endif
> +        POSTAMBLE
> +}
> +EXPORT_SYMBOL(__siphash_aligned);
> +#endif
> +
> +u64 __siphash_unaligned(const void *_data, size_t len, const siphash_key_t *key)
> +{
> +        const u8 *data = _data;
> +        const u8 *end = data + len - (len % sizeof(u64));
> +        const u8 left = len & (sizeof(u64) - 1);
> +        u64 m;
> +        PREAMBLE(len)
> +        for (; data != end; data += sizeof(u64)) {
> +                m = get_unaligned_le64(data);
> +                v3 ^= m;
> +                SIPROUND;
> +                SIPROUND;
> +                v0 ^= m;
> +        }
> +#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
> +        if (left)
> +                b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
> +                                                  bytemask_from_count(left)));
> +#else
> +        switch (left) {
> +        case 7: b |= ((u64)end[6]) << 48; fallthrough;
> +        case 6: b |= ((u64)end[5]) << 40; fallthrough;
> +        case 5: b |= ((u64)end[4]) << 32; fallthrough;
> +        case 4: b |= get_unaligned_le32(end); break;
> +        case 3: b |= ((u64)end[2]) << 16; fallthrough;
> +        case 2: b |= get_unaligned_le16(end); break;
> +        case 1: b |= end[0];
> +        }
> +#endif
> +        POSTAMBLE
> +}
> +EXPORT_SYMBOL(__siphash_unaligned);
> +
> +/**
> + * siphash_1u64 - compute 64-bit siphash PRF value of a u64
> + * @first: first u64
> + * @key: the siphash key
> + */
> +u64 siphash_1u64(const u64 first, const siphash_key_t *key)
> +{
> +        PREAMBLE(8)
> +        v3 ^= first;
> +        SIPROUND;
> +        SIPROUND;
> +        v0 ^= first;
> +        POSTAMBLE
> +}
> +EXPORT_SYMBOL(siphash_1u64);
> +
> +/**
> + * siphash_2u64 - compute 64-bit siphash PRF value of 2 u64
> + * @first: first u64
> + * @second: second u64
> + * @key: the siphash key
> + */
> +u64 siphash_2u64(const u64 first, const u64 second, const siphash_key_t *key)
> +{
> +        PREAMBLE(16)
> +        v3 ^= first;
> +        SIPROUND;
> +        SIPROUND;
> +        v0 ^= first;
> +        v3 ^= second;
> +        SIPROUND;
> +        SIPROUND;
> +        v0 ^= second;
> +        POSTAMBLE
> +}
> +EXPORT_SYMBOL(siphash_2u64);
> +
> +/**
> + * siphash_3u64 - compute 64-bit siphash PRF value of 3 u64
> + * @first: first u64
> + * @second: second u64
> + * @third: third u64
> + * @key: the siphash key
> + */
> +u64 siphash_3u64(const u64 first, const u64 second, const u64 third,
> +                 const siphash_key_t *key)
> +{
> +        PREAMBLE(24)
> +        v3 ^= first;
> +        SIPROUND;
> +        SIPROUND;
> +        v0 ^= first;
> +        v3 ^= second;
> +        SIPROUND;
> +        SIPROUND;
> +        v0 ^= second;
> +        v3 ^= third;
> +        SIPROUND;
> +        SIPROUND;
> +        v0 ^= third;
> +        POSTAMBLE
> +}
> +EXPORT_SYMBOL(siphash_3u64);
> +
> +/**
> + * siphash_4u64 - compute 64-bit siphash PRF value of 4 u64
> + * @first: first u64
> + * @second: second u64
> + * @third: third u64
> + * @forth: forth u64
> + * @key: the siphash key
> + */
> +u64 siphash_4u64(const u64 first, const u64 second, const u64 third,
> +                 const u64 forth, const siphash_key_t *key)
> +{
> +        PREAMBLE(32)
> +        v3 ^= first;
> +        SIPROUND;
> +        SIPROUND;
> +        v0 ^= first;
> +        v3 ^= second;
> +        SIPROUND;
> +        SIPROUND;
> +        v0 ^= second;
> +        v3 ^= third;
> +        SIPROUND;
> +        SIPROUND;
> +        v0 ^= third;
> +        v3 ^= forth;
> +        SIPROUND;
> +        SIPROUND;
> +        v0 ^= forth;
> +        POSTAMBLE
> +}
> +EXPORT_SYMBOL(siphash_4u64);
> +
> +u64 siphash_1u32(const u32 first, const siphash_key_t *key)
> +{
> +        PREAMBLE(4)
> +        b |= first;
> +        POSTAMBLE
> +}
> +EXPORT_SYMBOL(siphash_1u32);
> +
> +u64 siphash_3u32(const u32 first, const u32 second, const u32 third,
> +                 const siphash_key_t *key)
> +{
> +        u64 combined = (u64)second << 32 | first;
> +        PREAMBLE(12)
> +        v3 ^= combined;
> +        SIPROUND;
> +        SIPROUND;
> +        v0 ^= combined;
> +        b |= third;
> +        POSTAMBLE
> +}
> +EXPORT_SYMBOL(siphash_3u32);
> +
> +#if BITS_PER_LONG == 64
> +/* Note that on 64-bit, we make HalfSipHash1-3 actually be SipHash1-3, for
> + * performance reasons. On 32-bit, below, we actually implement HalfSipHash1-3.
> + */
> +
> +#define HSIPROUND SIPROUND
> +#define HPREAMBLE(len) PREAMBLE(len)
> +#define HPOSTAMBLE \
> +        v3 ^= b; \
> +        HSIPROUND; \
> +        v0 ^= b; \
> +        v2 ^= 0xff; \
> +        HSIPROUND; \
> +        HSIPROUND; \
> +        HSIPROUND; \
> +        return (v0 ^ v1) ^ (v2 ^ v3);
> +
> +#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> +u32 __hsiphash_aligned(const void *_data, size_t len, const hsiphash_key_t *key)
> +{
> +        const u8 *data = _data;
> +        const u8 *end = data + len - (len % sizeof(u64));
> +        const u8 left = len & (sizeof(u64) - 1);
> +        u64 m;
> +        HPREAMBLE(len)
> +        for (; data != end; data += sizeof(u64)) {
> +                m = le64_to_cpup(data);
> +                v3 ^= m;
> +                HSIPROUND;
> +                v0 ^= m;
> +        }
> +#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
> +        if (left)
> +                b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
> +                                                  bytemask_from_count(left)));
> +#else
> +        switch (left) {
> +        case 7: b |= ((u64)end[6]) << 48; fallthrough;
> +        case 6: b |= ((u64)end[5]) << 40; fallthrough;
> +        case 5: b |= ((u64)end[4]) << 32; fallthrough;
> +        case 4: b |= le32_to_cpup(data); break;
> +        case 3: b |= ((u64)end[2]) << 16; fallthrough;
> +        case 2: b |= le16_to_cpup(data); break;
> +        case 1: b |= end[0];
> +        }
> +#endif
> +        HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(__hsiphash_aligned);
> +#endif
> +
> +u32 __hsiphash_unaligned(const void *_data, size_t len,
> +                         const hsiphash_key_t *key)
> +{
> +        const u8 *data = _data;
> +        const u8 *end = data + len - (len % sizeof(u64));
> +        const u8 left = len & (sizeof(u64) - 1);
> +        u64 m;
> +        HPREAMBLE(len)
> +        for (; data != end; data += sizeof(u64)) {
> +                m = get_unaligned_le64(data);
> +                v3 ^= m;
> +                HSIPROUND;
> +                v0 ^= m;
> +        }
> +#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
> +        if (left)
> +                b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
> +                                                  bytemask_from_count(left)));
> +#else
> +        switch (left) {
> +        case 7: b |= ((u64)end[6]) << 48; fallthrough;
> +        case 6: b |= ((u64)end[5]) << 40; fallthrough;
> +        case 5: b |= ((u64)end[4]) << 32; fallthrough;
> +        case 4: b |= get_unaligned_le32(end); break;
> +        case 3: b |= ((u64)end[2]) << 16; fallthrough;
> +        case 2: b |= get_unaligned_le16(end); break;
> +        case 1: b |= end[0];
> +        }
> +#endif
> +        HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(__hsiphash_unaligned);
> +
> +/**
> + * hsiphash_1u32 - compute 64-bit hsiphash PRF value of a u32
> + * @first: first u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_1u32(const u32 first, const hsiphash_key_t *key)
> +{
> +        HPREAMBLE(4)
> +        b |= first;
> +        HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_1u32);
> +
> +/**
> + * hsiphash_2u32 - compute 32-bit hsiphash PRF value of 2 u32
> + * @first: first u32
> + * @second: second u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_2u32(const u32 first, const u32 second, const hsiphash_key_t *key)
> +{
> +        u64 combined = (u64)second << 32 | first;
> +        HPREAMBLE(8)
> +        v3 ^= combined;
> +        HSIPROUND;
> +        v0 ^= combined;
> +        HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_2u32);
> +
> +/**
> + * hsiphash_3u32 - compute 32-bit hsiphash PRF value of 3 u32
> + * @first: first u32
> + * @second: second u32
> + * @third: third u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_3u32(const u32 first, const u32 second, const u32 third,
> +                  const hsiphash_key_t *key)
> +{
> +        u64 combined = (u64)second << 32 | first;
> +        HPREAMBLE(12)
> +        v3 ^= combined;
> +        HSIPROUND;
> +        v0 ^= combined;
> +        b |= third;
> +        HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_3u32);
> +
> +/**
> + * hsiphash_4u32 - compute 32-bit hsiphash PRF value of 4 u32
> + * @first: first u32
> + * @second: second u32
> + * @third: third u32
> + * @forth: forth u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third,
> +                  const u32 forth, const hsiphash_key_t *key)
> +{
> +        u64 combined = (u64)second << 32 | first;
> +        HPREAMBLE(16)
> +        v3 ^= combined;
> +        HSIPROUND;
> +        v0 ^= combined;
> +        combined = (u64)forth << 32 | third;
> +        v3 ^= combined;
> +        HSIPROUND;
> +        v0 ^= combined;
> +        HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_4u32);
> +#else
> +#define HSIPROUND HSIPHASH_PERMUTATION(v0, v1, v2, v3)
> +
> +#define HPREAMBLE(len) \
> +        u32 v0 = HSIPHASH_CONST_0; \
> +        u32 v1 = HSIPHASH_CONST_1; \
> +        u32 v2 = HSIPHASH_CONST_2; \
> +        u32 v3 = HSIPHASH_CONST_3; \
> +        u32 b = ((u32)(len)) << 24; \
> +        v3 ^= key->key[1]; \
> +        v2 ^= key->key[0]; \
> +        v1 ^= key->key[1]; \
> +        v0 ^= key->key[0];
> +
> +#define HPOSTAMBLE \
> +        v3 ^= b; \
> +        HSIPROUND; \
> +        v0 ^= b; \
> +        v2 ^= 0xff; \
> +        HSIPROUND; \
> +        HSIPROUND; \
> +        HSIPROUND; \
> +        return v1 ^ v3;
> +
> +#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> +u32 __hsiphash_aligned(const void *_data, size_t len, const hsiphash_key_t *key)
> +{
> +        const u8 *data = _data;
> +        const u8 *end = data + len - (len % sizeof(u32));
> +        const u8 left = len & (sizeof(u32) - 1);
> +        u32 m;
> +        HPREAMBLE(len)
> +        for (; data != end; data += sizeof(u32)) {
> +                m = le32_to_cpup(data);
> +                v3 ^= m;
> +                HSIPROUND;
> +                v0 ^= m;
> +        }
> +        switch (left) {
> +        case 3: b |= ((u32)end[2]) << 16; fallthrough;
> +        case 2: b |= le16_to_cpup(data); break;
> +        case 1: b |= end[0];
> +        }
> +        HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(__hsiphash_aligned);
> +#endif
> +
> +u32 __hsiphash_unaligned(const void *_data, size_t len,
> +                         const hsiphash_key_t *key)
> +{
> +        const u8 *data = _data;
> +        const u8 *end = data + len - (len % sizeof(u32));
> +        const u8 left = len & (sizeof(u32) - 1);
> +        u32 m;
> +        HPREAMBLE(len)
> +        for (; data != end; data += sizeof(u32)) {
> +                m = get_unaligned_le32(data);
> +                v3 ^= m;
> +                HSIPROUND;
> +                v0 ^= m;
> +        }
> +        switch (left) {
> +        case 3: b |= ((u32)end[2]) << 16; fallthrough;
> +        case 2: b |= get_unaligned_le16(end); break;
> +        case 1: b |= end[0];
> +        }
> +        HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(__hsiphash_unaligned);
> +
> +/**
> + * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32
> + * @first: first u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_1u32(const u32 first, const hsiphash_key_t *key)
> +{
> +        HPREAMBLE(4)
> +        v3 ^= first;
> +        HSIPROUND;
> +        v0 ^= first;
> +        HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_1u32);
> +
> +/**
> + * hsiphash_2u32 - compute 32-bit hsiphash PRF value of 2 u32
> + * @first: first u32
> + * @second: second u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_2u32(const u32 first, const u32 second, const hsiphash_key_t *key)
> +{
> +        HPREAMBLE(8)
> +        v3 ^= first;
> +        HSIPROUND;
> +        v0 ^= first;
> +        v3 ^= second;
> +        HSIPROUND;
> +        v0 ^= second;
> +        HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_2u32);
> +
> +/**
> + * hsiphash_3u32 - compute 32-bit hsiphash PRF value of 3 u32
> + * @first: first u32
> + * @second: second u32
> + * @third: third u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_3u32(const u32 first, const u32 second, const u32 third,
> +                  const hsiphash_key_t *key)
> +{
> +        HPREAMBLE(12)
> +        v3 ^= first;
> +        HSIPROUND;
> +        v0 ^= first;
> +        v3 ^= second;
> +        HSIPROUND;
> +        v0 ^= second;
> +        v3 ^= third;
> +        HSIPROUND;
> +        v0 ^= third;
> +        HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_3u32);
> +
> +/**
> + * hsiphash_4u32 - compute 32-bit hsiphash PRF value of 4 u32
> + * @first: first u32
> + * @second: second u32
> + * @third: third u32
> + * @forth: forth u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third,
> +                  const u32 forth, const hsiphash_key_t *key)
> +{
> +        HPREAMBLE(16)
> +        v3 ^= first;
> +        HSIPROUND;
> +        v0 ^= first;
> +        v3 ^= second;
> +        HSIPROUND;
> +        v0 ^= second;
> +        v3 ^= third;
> +        HSIPROUND;
> +        v0 ^= third;
> +        v3 ^= forth;
> +        HSIPROUND;
> +        v0 ^= forth;
> +        HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_4u32);
> +#endif
> diff --git a/sys/conf/files b/sys/conf/files
> index 8deb2bd400c0..d0c4ea5f544d 100644
> --- a/sys/conf/files
> +++ b/sys/conf/files
> @@ -4704,6 +4704,8 @@ compat/linuxkpi/common/src/linux_shmemfs.c      optional compat_linuxkpi \
>         compile-with "${LINUXKPI_C}"
>  compat/linuxkpi/common/src/linux_shrinker.c     optional compat_linuxkpi \
>         compile-with "${LINUXKPI_C}"
> +compat/linuxkpi/common/src/linux_siphash.c      optional compat_linuxkpi \
> +       compile-with "${LINUXKPI_C}"
>  compat/linuxkpi/common/src/linux_skbuff.c       optional compat_linuxkpi \
>         compile-with "${LINUXKPI_C}"
>  compat/linuxkpi/common/src/linux_slab.c         optional compat_linuxkpi \
> diff --git a/sys/modules/linuxkpi/Makefile b/sys/modules/linuxkpi/Makefile
> index a662f5dffbb6..c465c76a7626 100644
> --- a/sys/modules/linuxkpi/Makefile
> +++ b/sys/modules/linuxkpi/Makefile
> @@ -28,6 +28,7 @@ SRCS=  linux_compat.c \
>         linux_shmemfs.c \
>         linux_shrinker.c \
>         linux_simple_attr.c \
> +       linux_siphash.c \
>         linux_skbuff.c \
>         linux_slab.c \
>         linux_tasklet.c \
>
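
For context on the armv7 failure above: on 32-bit targets BITS_PER_LONG is 32, so
linux_siphash.c takes the #else branch where HSIPROUND expands to
HSIPHASH_PERMUTATION(), which calls rol32(); the 64-bit path only needs rol64(),
which is presumably already in scope, so only 32-bit builds trip over the missing
declaration. In Linux the rotate helpers come from <linux/bitops.h>, so the
LinuxKPI copy likely needs the equivalent declaration visible on 32-bit, either
via an include or a local definition. A minimal sketch of what the compiler is
looking for, matching the usual Linux definition (untested here, just to show the
shape of the missing helper):

static inline u32 rol32(u32 word, unsigned int shift)
{
        /* Rotate left; masking the shift count keeps shift == 0 well defined. */
        return (word << (shift & 31)) | (word >> ((-shift) & 31));
}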
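
And for anyone not familiar with the interface being imported: it is meant to be
used with a random 128-bit key and short inputs. A rough caller-side sketch; the
names below are made up for illustration and are not from the amdgpu code, and it
assumes get_random_bytes() from <linux/random.h> is available in LinuxKPI:

#include <linux/random.h>
#include <linux/siphash.h>

static siphash_key_t example_key;               /* hypothetical per-driver key */

static void example_init(void)
{
        /* Key must be unpredictable for SipHash's PRF guarantees to hold. */
        get_random_bytes(&example_key, sizeof(example_key));
}

static u64 example_hash(const void *obj, size_t len)
{
        /* siphash() picks the aligned or unaligned path internally. */
        return siphash(obj, len, &example_key);
}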