Date: Tue, 14 Oct 2025 14:23:51 +0000 (GMT)
From: jenkins-admin@FreeBSD.org
To: olce@FreeBSD.org, dev-ci@FreeBSD.org
Cc: jenkins-admin@FreeBSD.org
Message-ID: <357257060.1301.1760451831934@jenkins.ci.freebsd.org>
In-Reply-To: <799639133.1289.1760450041055@jenkins.ci.freebsd.org>
References: <799639133.1289.1760450041055@jenkins.ci.freebsd.org>
Subject: FreeBSD-main-amd64-test - Build #26989 - Still Failing
List-Id: Continuous Integration Build and Test Results
List-Archive: https://lists.freebsd.org/archives/dev-ci
Sender: owner-dev-ci@FreeBSD.org
X-Jenkins-Job: FreeBSD-main-amd64-test
X-Jenkins-Result: FAILURE

FreeBSD-main-amd64-test - Build #26989 (2110ae0ef9d6ca8cf52b29fcaf926c4343f56826) - Still Failing

Build information: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/26989/
Full change log: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/26989/changes
Full build log: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/26989/console
Status explanation:
"Failure" - the build is suspected of being broken by the following changes
"Still Failing" - the build has not been fixed by the following changes; this notification notes that these changes have not been fully tested by the CI system

Change summaries:
(Those commits are likely but not certainly responsible)

bda3b61512b2597d4c77d2b9c9074b844dec0405 by olce:
sys/rpc: UNIX auth: Rename 'ngroups' => 'supp_ngroups' for clarity

47e9c81d4f1324674c624df02a51ad3a72aa7444 by olce:
sys/rpc: UNIX auth: Fix OOB accesses, notably writes on decode

f7c4f800cc0b4fac1c99cda8e22d46b67592f9fa by olce:
sys/rpc: Define AUTH_SYS_MAX_{GROUPS,HOSTNAME}

b119ef0f6a81eb32b0e1cd0075cec499543e7ddd by olce:
sys/rpc: UNIX auth: Use AUTH_SYS_MAX_{GROUPS,HOSTNAME} as limits (1/2)

e665c0f6f7a611d25d9d7e7f64d98c84b3a92820 by olce:
sys/rpc: UNIX auth: Use AUTH_SYS_MAX_{GROUPS,HOSTNAME} as limits (2/2)

d4cc791f3b2e1b6926420649a481eacaf3bf268e by olce:
sys/rpc: UNIX auth: Fix OOB reads on too short message

4ae70c3ea498e06676040ee99254d261e29ae82e by olce:
sys/rpc: UNIX auth: Support XDR_FREE

a4105a5d4e179aa1ef661ee45d6008e83fefd2a7 by olce:
sys/rpc: UNIX auth: Style: Remove unnecessary headers, minor changes

2110ae0ef9d6ca8cf52b29fcaf926c4343f56826 by olce:
sys/rpc: UNIX auth: Do not log on bogus AUTH_SYS messages
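The sys/rpc commits above all harden the kernel's AUTH_SYS (AUTH_UNIX) credential decoder, whose on-wire group list carries a caller-controlled count. The sketch below is a hedged illustration only, not the committed FreeBSD code: the structure and helper are invented for the example, and AUTH_SYS_MAX_GROUPS is assumed here to be the 16-entry ceiling RFC 5531 allows. It shows the kind of bound check that turns a bogus count into a decode error instead of the out-of-bounds writes the commits fix.

/*
 * Hedged sketch, not the committed sys/rpc code: copy a wire group
 * list into a fixed-size credential only after validating the count.
 */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define AUTH_SYS_MAX_GROUPS	16	/* assumed value; RFC 5531 allows at most 16 gids */

struct authsys_cred_sketch {
	uint32_t	uid;
	uint32_t	gid;
	uint32_t	supp_ngroups;	/* mirrors the 'ngroups' => 'supp_ngroups' rename */
	uint32_t	supp_groups[AUTH_SYS_MAX_GROUPS];
};

static bool
decode_groups_sketch(struct authsys_cred_sketch *cred,
    const uint32_t *wire_groups, uint32_t wire_count)
{
	/*
	 * Without this check, a peer-supplied count larger than the
	 * array turns the memcpy below into an out-of-bounds write.
	 */
	if (wire_count > AUTH_SYS_MAX_GROUPS)
		return (false);
	cred->supp_ngroups = wire_count;
	memcpy(cred->supp_groups, wire_groups,
	    (size_t)wire_count * sizeof(wire_groups[0]));
	return (true);
}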
The end of the build log:

[...truncated 3.85 MiB...]
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
epair1a: link state changed to DOWN
epair1b: link state changed to DOWN
passed [0.268s]
sys/netinet/multicast:IP_ADD_MEMBERSHIP_ip_mreqn -> epair0a: Ethernet address: 58:9c:fc:10:77:9e
epair0b: Ethernet address: 58:9c:fc:10:a8:71
epair0a: link state changed to UP
epair0b: link state changed to UP
epair1a: Ethernet address: 58:9c:fc:10:00:b9
epair1b: Ethernet address: 58:9c:fc:10:24:51
epair1a: link state changed to UP
epair1b: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
epair1a: link state changed to DOWN
epair1b: link state changed to DOWN
passed [0.234s]
sys/netinet/multicast:MCAST_JOIN_GROUP -> epair0a: Ethernet address: 58:9c:fc:10:77:9e
epair0b: Ethernet address: 58:9c:fc:10:a8:71
epair0a: link state changed to UP
epair0b: link state changed to UP
epair1a: Ethernet address: 58:9c:fc:10:00:b9
epair1b: Ethernet address: 58:9c:fc:10:24:51
epair1a: link state changed to UP
epair1b: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
epair1a: link state changed to DOWN
epair1b: link state changed to DOWN
passed [0.244s]
sys/netinet/output:output_raw_flowid_mpath_success -> epair0a: Ethernet address: 58:9c:fc:10:77:9e
epair0b: Ethernet address: 58:9c:fc:10:a8:71
epair0a: link state changed to UP
epair0b: link state changed to UP
epair1a: Ethernet address: 58:9c:fc:10:00:b9
epair1b: Ethernet address: 58:9c:fc:10:24:51
epair1a: link state changed to UP
epair1b: link state changed to UP
lo1: link state changed to UP
lo2: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
epair1a: link state changed to DOWN
epair1b: link state changed to DOWN
passed [0.629s]
sys/netinet/output:output_raw_success -> epair0a: Ethernet address: 58:9c:fc:10:77:9e
epair0b: Ethernet address: 58:9c:fc:10:a8:71
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
passed [0.184s]
sys/netinet/output:output_tcp_flowid_mpath_success -> epair0a: Ethernet address: 58:9c:fc:10:77:9e
epair0b: Ethernet address: 58:9c:fc:10:a8:71
epair0a: link state changed to UP
epair0b: link state changed to UP
epair1a: Ethernet address: 58:9c:fc:10:00:b9
epair1b: Ethernet address: 58:9c:fc:10:24:51
epair1a: link state changed to UP
epair1b: link state changed to UP
lo1: link state changed to UP
lo2: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
epair1a: link state changed to DOWN
epair1b: link state changed to DOWN
passed [2.550s]
sys/netinet/output:output_tcp_setup_success -> epair0a: Ethernet address: 58:9c:fc:10:77:9e
epair0b: Ethernet address: 58:9c:fc:10:a8:71
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
passed [0.266s]
sys/netinet/output:output_udp_flowid_mpath_success -> epair0a: Ethernet address: 58:9c:fc:10:77:9e
epair0b: Ethernet address: 58:9c:fc:10:a8:71
epair0a: link state changed to UP
epair0b: link state changed to UP
epair1a: Ethernet address: 58:9c:fc:10:00:b9
epair1b: Ethernet address: 58:9c:fc:10:24:51
epair1a: link state changed to UP
epair1b: link state changed to UP
lo1: link state changed to UP
lo2: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
epair1a: link state changed to DOWN
epair1b: link state changed to DOWN
passed [8.415s]
sys/netinet/output:output_udp_setup_success -> epair0a: Ethernet address: 58:9c:fc:10:77:9e
epair0b: Ethernet address: 58:9c:fc:10:a8:71
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
passed [1.312s]
sys/netinet/redirect:valid_redirect -> epair0a: Ethernet address: 58:9c:fc:10:77:9e
epair0b: Ethernet address: 58:9c:fc:10:a8:71
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0a: promiscuous mode enabled
epair0a: promiscuous mode disabled
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed [1.118s]
sys/netinet/so_reuseport_lb_test:basic_ipv4 -> passed [0.848s]
sys/netinet/so_reuseport_lb_test:basic_ipv6 -> Limiting tcp reset response from 18618 to 209 packets/sec
passed [0.836s]
sys/netinet/so_reuseport_lb_test:bind_without_listen -> passed [0.006s]
sys/netinet/so_reuseport_lb_test:concurrent_add -> passed [2.596s]
sys/netinet/so_reuseport_lb_test:connect_bound -> passed [0.005s]
sys/netinet/so_reuseport_lb_test:connect_not_bound -> passed [0.004s]
sys/netinet/so_reuseport_lb_test:double_listen_ipv4 -> passed [0.004s]
sys/netinet/so_reuseport_lb_test:double_listen_ipv6 -> passed [0.004s]
sys/netinet/socket_afinet:socket_afinet -> passed [0.004s]
sys/netinet/socket_afinet:socket_afinet_bind_connected_port -> passed [0.007s]
sys/netinet/socket_afinet:socket_afinet_bind_ok -> passed [0.004s]
sys/netinet/socket_afinet:socket_afinet_bind_zero -> skipped: doesn't work when mac_portacl(4) loaded (https://bugs.freebsd.org/238781) [0.004s]
sys/netinet/socket_afinet:socket_afinet_bindany -> passed [0.005s]
sys/netinet/socket_afinet:socket_afinet_multibind -> passed [0.161s]
sys/netinet/socket_afinet:socket_afinet_poll_no_rdhup -> passed [0.005s]
sys/netinet/socket_afinet:socket_afinet_poll_rdhup -> passed [0.005s]
sys/netinet/socket_afinet:socket_afinet_stream_reconnect -> passed [0.004s]
sys/netinet/tcp_connect_port_test:basic_ipv4 -> Limiting tcp reset response from 13581 to 198 packets/sec
Limiting tcp reset response from 17026 to 208 packets/sec
Limiting tcp reset response from 15056 to 187 packets/sec
Limiting tcp reset response from 14951 to 200 packets/sec
Limiting tcp reset response from 15028 to 209 packets/sec
passed [4.187s]
sys/netinet/tcp_connect_port_test:basic_ipv6 -> Limiting tcp reset response from 18662 to 191 packets/sec
Limiting tcp reset response from 17057 to 184 packets/sec
Limiting tcp reset response from 13612 to 197 packets/sec
Limiting tcp reset response from 13564 to 189 packets/sec
passed [4.180s]
TCP HPTS started 2 ((unbounded)) swi interrupt threads
sys/netinet/tcp_hpts_test.py:TestTcpHpts::test_concurrent_operations -> failed: /usr/tests/atf_python/sys/netlink/netlink.py:376: ValueError [0.458s]
sys/netinet/tcp_hpts_test.py:TestTcpHpts::test_cpu_assignment -> passed [0.335s]
sys/netinet/tcp_hpts_test.py:TestTcpHpts::test_deferred_requests -> failed: /usr/tests/atf_python/sys/netlink/netlink.py:376: ValueError [0.474s]
sys/netinet/tcp_hpts_test.py:TestTcpHpts::test_direct_wake_mechanism -> Sleeping on "-" with the following non-sleepable locks held:
exclusive sleep mutex tcp_hpts_lck (hpts) r = 0 (0xfffff80147c8c100) locked @ /usr/src/sys/netinet/tcp_hpts_test.c:1467
stack backtrace:
#0 0xffffffff80c061dc at witness_debugger+0x6c
#1 0xffffffff80c073f0 at witness_warn+0x430
#2 0xffffffff80b9a928 at _sleep+0x58
#3 0xffffffff80bf8e91 at taskqueue_thread_loop+0xc1
#4 0xffffffff80b3e472 at fork_exit+0x82
#5 0xffffffff810b45ce at fork_trampoline+0xe
Sleeping on "-" with the following non-sleepable locks held:
exclusive sleep mutex tcp_hpts_lck (hpts) r = 0 (0xfffff80147c8c100) locked @ /usr/src/sys/netinet/tcp_hpts_test.c:1467
stack backtrace:
#0 0xffffffff80c061dc at witness_debugger+0x6c
#1 0xffffffff80c073f0 at witness_warn+0x430
#2 0xffffffff80b9a928 at _sleep+0x58
#3 0xffffffff80bf8e91 at taskqueue_thread_loop+0xc1
#4 0xffffffff80b3e472 at fork_exit+0x82
#5 0xffffffff810b45ce at fork_trampoline+0xe
Sleeping thread (tid 119997, pid 0) owns a non-sleepable lock
KDB: stack backtrace of thread 119997:
sched_switch() at sched_switch+0x5e2/frame 0xfffffe008beb5dc0
mi_switch() at mi_switch+0x172/frame 0xfffffe008beb5de0
sleepq_switch() at sleepq_switch+0x109/frame 0xfffffe008beb5e20
_sleep() at _sleep+0x2a5/frame 0xfffffe008beb5ec0
taskqueue_thread_loop() at taskqueue_thread_loop+0xc1/frame 0xfffffe008beb5ef0
fork_exit() at fork_exit+0x82/frame 0xfffffe008beb5f30
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe008beb5f30
--- trap 0x3, rip = 0x1950a9194210, rsp = 0x1950a9194300, rbp = 0x1950a9126bf8 ---
panic: sleeping thread holds tcp_hpts_lck
cpuid = 1
time = 1760451830
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe008bf0fb20
vpanic() at vpanic+0x136/frame 0xfffffe008bf0fc50
panic() at panic+0x43/frame 0xfffffe008bf0fcb0
propagate_priority() at propagate_priority+0x2a6/frame 0xfffffe008bf0fcf0
turnstile_wait() at turnstile_wait+0x399/frame 0xfffffe008bf0fd30
__mtx_lock_sleep() at __mtx_lock_sleep+0x1c1/frame 0xfffffe008bf0fdc0
__mtx_lock_flags() at __mtx_lock_flags+0xdd/frame 0xfffffe008bf0fe10
tcp_hpts_thread() at tcp_hpts_thread+0x2a/frame 0xfffffe008bf0fe60
ithread_loop() at ithread_loop+0x266/frame 0xfffffe008bf0fef0
fork_exit() at fork_exit+0x82/frame 0xfffffe008bf0ff30
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe008bf0ff30
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---
KDB: enter: panic
[ thread pid 12 tid 119999 ]
Stopped at kdb_enter+0x33: movq $0,0x121d0a2(%rip)
db:0:kdb.enter.panic> show pcpu
cpuid = 1
dynamic pcpu = 0xfffffe008d1c25c0
curthread = 0xfffff8005f47b780: pid 12 tid 119999 critnest 1 "swi1: hpts"
curpcb = 0xfffff8005f47bcd0
fpcurthread = none
idlethread = 0xfffff8000460b000: tid 100004 "idle: cpu1"
self = 0xffffffff82611000
curpmap = 0xffffffff81d9f520
tssp = 0xffffffff82611384
rsp0 = 0xfffffe008bf10000
kcr3 = 0x800000000259b002
ucr3 = 0xffffffffffffffff
scr3 = 0x22c0688ea
gs32p = 0xffffffff82611404
ldt = 0xffffffff82611444
tss = 0xffffffff82611434
curvnet = 0
spin locks held:
db:0:kdb.enter.panic> reset
Uptime: 28m40s
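The failure itself is the panic above: WITNESS reports that a taskqueue thread (tid 119997) went to sleep while still holding the tcp_hpts_lck mutex, which had been locked at tcp_hpts_test.c:1467, and when the "swi1: hpts" thread later blocked on that mutex in tcp_hpts_thread(), propagate_priority() found the owner asleep and panicked the VM with "sleeping thread holds tcp_hpts_lck". That reset is presumably also why no test report is found later in the log. For readers unfamiliar with the rule being enforced, below is a minimal, hedged sketch of the pattern WITNESS flags, written against the generic mutex(9) and pause(9) KPIs; the lock name and the pause() call are placeholders, and this is not the actual tcp_hpts_test code.

/* Hedged sketch of the pattern WITNESS reports above; not tcp_hpts_test code. */
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/systm.h>

static struct mtx example_lck;
MTX_SYSINIT(example_lck, &example_lck, "example_lck", MTX_DEF);

static void
sleep_with_mutex_held(void)
{
	mtx_lock(&example_lck);
	/*
	 * Sleeping here is illegal: WITNESS prints "Sleeping on ... with the
	 * following non-sleepable locks held", and if another thread then
	 * blocks on example_lck while this one is asleep, the kernel panics
	 * with "sleeping thread holds example_lck".
	 */
	pause("examp", hz);
	mtx_unlock(&example_lck);
}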
+ rc=0
+ echo 'bhyve return code = 0'
bhyve return code = 0
+ sudo /usr/sbin/bhyvectl '--vm=testvm-main-amd64-26989' --destroy
+ sh -ex freebsd-ci/scripts/test/extract-meta.sh
+ METAOUTDIR=meta-out
+ rm -fr meta-out
+ mkdir meta-out
+ tar xvf meta.tar -C meta-out
x ./
x ./auto-shutdown
x ./disable-dtrace-tests.sh
x ./run.sh
x ./run-kyua.sh
x ./disable-zfs-tests.sh
x ./disable-notyet-tests.sh
+ rm -f 'test-report.*'
+ mv 'meta-out/test-report.*' .
mv: rename meta-out/test-report.* to ./test-report.*: No such file or directory
+ report=test-report.xml
+ [ -e freebsd-ci/jobs/FreeBSD-main-amd64-test/xfail-list -a -e test-report.xml ]
+ rm -f disk-cam
+ jot 5
+ rm -f disk1
+ rm -f disk2
+ rm -f disk3
+ rm -f disk4
+ rm -f disk5
+ rm -f disk-test.img
[PostBuildScript] - [INFO] Executing post build scripts.
[FreeBSD-main-amd64-test] $ /bin/sh -xe /tmp/jenkins6144274288552608960.sh
+ ./freebsd-ci/artifact/post-link.py
Post link: {'job_name': 'FreeBSD-main-amd64-test', 'commit': '2110ae0ef9d6ca8cf52b29fcaf926c4343f56826', 'branch': 'main', 'target': 'amd64', 'target_arch': 'amd64', 'link_type': 'latest_tested'}
"Link created: main/latest_tested/amd64/amd64 -> ../../2110ae0ef9d6ca8cf52b29fcaf926c4343f56826/amd64/amd64\n"
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Checking for post-build
Performing post-build step
Checking if email needs to be generated
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Sending mail from default account using System Admin e-mail address