Date: Sat, 7 Feb 2026 02:37:24 +0000 (GMT)
From: jenkins-admin@FreeBSD.org
To: np@FreeBSD.org, dev-ci@FreeBSD.org
Cc: jenkins-admin@FreeBSD.org
Subject: FreeBSD-main-amd64-test - Build #27835 - Failure
Message-ID: <427961256.1739.1770431844673@jenkins.ci.freebsd.org>
In-Reply-To: <29711544.1711.1770410272195@jenkins.ci.freebsd.org>
FreeBSD-main-amd64-test - Build #27835 (9352d2f6dd55afcf0ac24d2806da7c6febf19589) - Failure

Build information: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/27835/
Full change log: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/27835/changes
Full build log: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/27835/console

Status explanation:
"Failure" - the build is suspected to have been broken by the following changes
"Still Failing" - the build has not been fixed by the following changes; this is a notification that these changes have not been fully tested by the CI system

Change summaries:
(These commits are likely, but not certainly, responsible.)

9352d2f6dd55afcf0ac24d2806da7c6febf19589 by np:
cxgbe(4): sysctl to disable/enable the TCB cache

The end of the build log:

[...truncated 4.17 MiB...]
hardcoded lock order "in6_multi_list_mtx"(sleep mutex) -> "mld_mtx"(sleep mutex) first seen at:
#0 0xffffffff80c42bd4 at witness_checkorder+0x364
#1 0xffffffff80ba19c1 at __mtx_lock_flags+0x91
#2 0xffffffff80e28ec1 at mld_change_state+0x81
#3 0xffffffff80e1586e at in6_joingroup_locked+0x3ae
#4 0xffffffff80e15494 at in6_joingroup+0x44
#5 0xffffffff80e0ccea at in6_update_ifa+0x102a
#6 0xffffffff80e12ef2 at in6_ifattach+0x132
#7 0xffffffff80e0ebbf at in6_if_up+0x7f
#8 0xffffffff80d0f386 at if_up+0xd6
#9 0xffffffff80d105a6 at ifhwioctl+0xdc6
#10 0xffffffff80d12075 at ifioctl+0x965
#11 0xffffffff80c4a3f1 at kern_ioctl+0x2a1
#12 0xffffffff80c4a0ef at sys_ioctl+0x12f
#13 0xffffffff811289e9 at amd64_syscall+0x169
#14 0xffffffff810f7f9b at fast_syscall_common+0xf8
lock order "mld_mtx"(sleep mutex) -> "if_ovpn_lock"(rm) first seen at:
#0 0xffffffff80c42bd4 at witness_checkorder+0x364
#1 0xffffffff80bc22f9 at _rm_rlock_debug+0x129
#2 0xffffffff82ee7e37 at ovpn_output+0x57
#3 0xffffffff80e24ea6 at ip6_output+0x19c6
#4 0xffffffff80e2b165 at mld_dispatch_packet+0x325
#5 0xffffffff80e2bc80 at mld_fasttimo+0x520
#6 0xffffffff80be742b at softclock_call_cc+0x19b
#7 0xffffffff80be8b66 at softclock_thread+0xc6
#8 0xffffffff80b789b2 at fork_exit+0x82
#9 0xffffffff810f86ce at fork_trampoline+0xe
lock order "if_ovpn_lock"(rm) -> "so_snd"(sleep mutex) first seen at:
#0 0xffffffff80c42bd4 at witness_checkorder+0x364
#1 0xffffffff80ba19c1 at __mtx_lock_flags+0x91
#2 0xffffffff82eeb67e at ovpn_peer_release_ref+0x20e
#3 0xffffffff82ee7b32 at ovpn_clone_destroy+0x92
#4 0xffffffff80d16669 at if_clone_destroyif_flags+0x69
#5 0xffffffff80d16f56 at if_clone_detach+0xe6
#6 0xffffffff82eed665 at ovpn_prison_remove+0x55
#7 0xffffffff80ba57e2 at osd_call+0xb2
#8 0xffffffff80b829ec at prison_deref+0x5dc
#9 0xffffffff80b844e7 at sys_jail_remove+0x1a7
#10 0xffffffff81128cd1 at amd64_syscall+0x451
#11 0xffffffff810f7f9b at fast_syscall_common+0xf8
hardcoded lock order "so_snd"(sleep mutex) -> "so_rcv"(sleep mutex) first seen at:
#0 0xffffffff80c42bd4 at witness_checkorder+0x364
#1 0xffffffff80ba19c1 at __mtx_lock_flags+0x91
#2 0xffffffff80c7f5cf at soreserve+0x5f
#3 0xffffffff80e3db95 at udp6_attach+0x75
#4 0xffffffff80c82893 at soattach+0xd3
#5 0xffffffff80c8216c at socreate+0x19c
#6 0xffffffff80c8bfbe at kern_socket+0xbe
#7 0xffffffff811289e9 at amd64_syscall+0x169
#8 0xffffffff810f7f9b at fast_syscall_common+0xf8
lock order "if_ovpn_lock"(rm) -> "so_rcv"(sleep mutex) first seen at:
#0 0xffffffff80c42bd4 at witness_checkorder+0x364
#1 0xffffffff80ba19c1 at __mtx_lock_flags+0x91
#2 0xffffffff82eeb653 at ovpn_peer_release_ref+0x1e3
#3 0xffffffff82ee7b32 at ovpn_clone_destroy+0x92
#4 0xffffffff80d16669 at if_clone_destroyif_flags+0x69
#5 0xffffffff80d16f56 at if_clone_detach+0xe6
#6 0xffffffff82eed665 at ovpn_prison_remove+0x55
#7 0xffffffff80ba57e2 at osd_call+0xb2
#8 0xffffffff80b829ec at prison_deref+0x5dc
#9 0xffffffff80b844e7 at sys_jail_remove+0x1a7
#10 0xffffffff81128cd1 at amd64_syscall+0x451
#11 0xffffffff810f7f9b at fast_syscall_common+0xf8
lock order "so_rcv"(sleep mutex) -> "tcphash"(sleep mutex) first seen at:
#0 0xffffffff80c42bd4 at witness_checkorder+0x364
#1 0xffffffff80ba19c1 at __mtx_lock_flags+0x91
#2 0xffffffff80dfa76b at tcp6_usr_listen+0xfb
#3 0xffffffff80c83a21 at solisten+0x41
#4 0xffffffff80c8c4bf at kern_listen+0x6f
#5 0xffffffff81128cd1 at amd64_syscall+0x451
#6 0xffffffff810f7f9b at fast_syscall_common+0xf8
hardcoded lock order "tcphash"(sleep mutex) -> "in6_ifaddr_lock"(rm) first seen at:
#0 0xffffffff80c42bd4 at witness_checkorder+0x364
#1 0xffffffff80bc22f9 at _rm_rlock_debug+0x129
#2 0xffffffff80e1cb87 at in6_selectsrc+0x3f7
#3 0xffffffff80e1c72d at in6_selectsrc_socket+0x6d
#4 0xffffffff80e19ad1 at in6_pcbconnect+0x291
#5 0xffffffff80dfc8fa at tcp6_connect+0xba
#6 0xffffffff80dfa275 at tcp6_usr_connect+0x2f5
#7 0xffffffff80c84a00 at soconnectat+0xc0
#8 0xffffffff80c8ccd1 at kern_connectat+0xe1
#9 0xffffffff80c8cbc1 at sys_connect+0x81
#10 0xffffffff81128cd1 at amd64_syscall+0x451
#11 0xffffffff810f7f9b at fast_syscall_common+0xf8
lock order "in6_ifaddr_lock"(rm) -> "lle"(rw) first seen at:
#0 0xffffffff80c42bd4 at witness_checkorder+0x364
#1 0xffffffff80bc44bd at __rw_rlock_int+0x7d
#2 0xffffffff80e0f24b at in6_lltable_lookup+0x10b
#3 0xffffffff80e2dd01 at nd6_lookup+0x81
#4 0xffffffff80e389ce at find_pfxlist_reachable_router+0x7e
#5 0xffffffff80e365c6 at pfxlist_onlink_check+0x3c6
#6 0xffffffff80e35cf2 at nd6_ra_input+0x1112
#7 0xffffffff80e063f0 at icmp6_input+0x5b0
#8 0xffffffff80e20d46 at ip6_input+0xbb6
#9 0xffffffff80d36a44 at netisr_dispatch_src+0xb4
#10 0xffffffff80d188ca at ether_demux+0x16a
#11 0xffffffff80d19e3e at ether_nh_input+0x3ce
#12 0xffffffff80d36a44 at netisr_dispatch_src+0xb4
#13 0xffffffff80d18d15 at ether_input+0xd5
#14 0xffffffff82ee1b54 at epair_tx_start_deferred+0xd4
#15 0xffffffff80c35612 at taskqueue_run_locked+0x1c2
#16 0xffffffff80c36503 at taskqueue_thread_loop+0xd3
#17 0xffffffff80b789b2 at fork_exit+0x82
lock order lle -> in6_multi_list_mtx attempted at:
#0 0xffffffff80c4351f at witness_checkorder+0xcaf
#1 0xffffffff80ba19c1 at __mtx_lock_flags+0x91
#2 0xffffffff80e0f691 at in6_lltable_delete_entry+0xc1
#3 0xffffffff80d1c3ba at lltable_delete_addr+0xda
#4 0xffffffff80e5fa2b at rtnl_handle_delneigh+0x7b
#5 0xffffffff80e5c2c2 at rtnl_handle_message+0x132
#6 0xffffffff80e5a54d at nl_taskqueue_handler+0x52d
#7 0xffffffff80c35612 at taskqueue_run_locked+0x1c2
#8 0xffffffff80c36503 at taskqueue_thread_loop+0xd3
#9 0xffffffff80b789b2 at fork_exit+0x82
#10 0xffffffff810f86ce at fork_trampoline+0xe
lock order reversal: (sleepable after non-sleepable)
 1st 0xfffff801f382cd10 lle (lle, rw) @ /usr/src/sys/netinet6/in6.c:2463
 2nd 0xffffffff822fbdb0 in6_multi_sx (in6_multi_sx, sx) @ /usr/src/sys/netinet6/in6_mcast.c:1329
lock order lle -> in6_multi_sx attempted at:
#0 0xffffffff80c4351f at witness_checkorder+0xcaf
#1 0xffffffff80bd3580 at _sx_xlock+0x60
#2 0xffffffff80e15c97 at in6_leavegroup+0x27
#3 0xffffffff80e0f6dc at in6_lltable_delete_entry+0x10c
#4 0xffffffff80d1c3ba at lltable_delete_addr+0xda
#5 0xffffffff80e5fa2b at rtnl_handle_delneigh+0x7b
#6 0xffffffff80e5c2c2 at rtnl_handle_message+0x132
#7 0xffffffff80e5a54d at nl_taskqueue_handler+0x52d
#8 0xffffffff80c35612 at taskqueue_run_locked+0x1c2
#9 0xffffffff80c36503 at taskqueue_thread_loop+0xd3
#10 0xffffffff80b789b2 at fork_exit+0x82
#11 0xffffffff810f86ce at fork_trampoline+0xe
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
passed [5.165s]
sys/netinet6/proxy_ndp:pndp_ifdestroy_success -> epair0a: Ethernet address: 06:92:99:f3:42:44
epair0b: Ethernet address: b2:1c:aa:db:49:78
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
passed [1.551s]
sys/netinet6/proxy_ndp:pndp_neighbor_advert -> epair0a: Ethernet address: b2:a7:37:ec:66:a6
epair0b: Ethernet address: ea:7b:8a:12:8d:6f
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
passed [4.420s]
sys/netinet6/redirect:valid_redirect -> epair0a: Ethernet address: be:2d:0c:af:83:af
epair0b: Ethernet address: b2:5d:ca:24:8e:f6
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0a: promiscuous mode enabled
epair0a: promiscuous mode disabled
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed [3.198s]
sys/netinet6/scapyi386:scapyi386 -> epair0a: Ethernet address: ee:bb:68:a6:57:35
epair0b: Ethernet address: 8e:fc:fc:58:67:21
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0a: promiscuous mode enabled
epair0a: promiscuous mode disabled
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed [4.624s]
sys/netinet6/test_ip6_output.py:TestIP6Output::test_output6_base -> broken: Test case body timed out [300.011s]
sys/netinet6/test_ip6_output.py:TestIP6Output::test_output6_nhop -> broken: Test case body timed out [300.008s]
sys/netinet6/test_ip6_output.py:TestIP6Output::test_output6_pktinfo[empty] -> broken: Test case body timed out [300.012s]
sys/netinet6/test_ip6_output.py:TestIP6Output::test_output6_pktinfo[ifsame] -> broken: Test case body timed out [300.005s]
sys/netinet6/test_ip6_output.py:TestIP6Output::test_output6_pktinfo[ipandif] -> broken: Test case body timed out [300.011s]
sys/netinet6/test_ip6_output.py:TestIP6Output::test_output6_pktinfo[iponly1] -> broken: Test case body timed out [300.009s]
sys/netinet6/test_ip6_output.py:TestIP6Output::test_output6_pktinfo[nolocalip] -> broken: Test case body timed out [300.007s]
sys/netinet6/test_ip6_output.py:TestIP6OutputLL::test_output6_linklocal -> broken: Test case body timed out [300.014s]
sys/netinet6/test_ip6_output.py:TestIP6OutputLoopback::test_output6_self_tcp[gu-no_sav] -> broken: Test case body timed out [300.008s]
sys/netinet6/test_ip6_output.py:TestIP6OutputLoopback::test_output6_self_tcp[gu-sav] -> broken: Test case body timed out [300.011s]
sys/netinet6/test_ip6_output.py:TestIP6OutputLoopback::test_output6_self_tcp[ll-no_sav] -> broken: Test case body timed out [300.006s]
sys/netinet6/test_ip6_output.py:TestIP6OutputLoopback::test_output6_self_tcp[ll-sav] -> broken: Test case body timed out [300.006s]
sys/netinet6/test_ip6_output.py:TestIP6OutputLoopback::test_output6_self_tcp[lo-no_sav] -> broken: Test case body timed out [300.004s]
sys/netinet6/test_ip6_output.py:TestIP6OutputLoopback::test_output6_self_tcp[lo-sav] -> broken: Test case body timed out [300.005s]
sys/netinet6/test_ip6_output.py:TestIP6OutputLoopback::test_output6_self_udp[gu-no_sav] -> broken: Test case body timed out [300.013s]
sys/netinet6/test_ip6_output.py:TestIP6OutputLoopback::test_output6_self_udp[gu-sav] -> broken: Test case body timed out [300.013s]
sys/netinet6/test_ip6_output.py:TestIP6OutputLoopback::test_output6_self_udp[ll-no_sav] -> broken: Test case body timed out [300.009s]
sys/netinet6/test_ip6_output.py:TestIP6OutputLoopback::test_output6_self_udp[ll-sav] -> broken: Test case body timed out [300.007s]
sys/netinet6/test_ip6_output.py:TestIP6OutputLoopback::test_output6_self_udp[lo-no_sav] -> broken: Test case body timed out [300.004s]
sys/netinet6/test_ip6_output.py:TestIP6OutputLoopback::test_output6_self_udp[lo-sav] -> broken: Test case body timed out [300.013s]
sys/netinet6/test_ip6_output.py:TestIP6OutputMulticast::test_output6_multicast[ff02] -> broken: Test case body timed out [300.004s]
sys/netinet6/test_ip6_output.py:TestIP6OutputMulticast::test_output6_multicast[ff05] -> broken: Test case body timed out [300.007s]
sys/netinet6/test_ip6_output.py:TestIP6OutputMulticast::test_output6_multicast[ff08] -> broken: Test case body timed out [300.011s]
sys/netinet6/test_ip6_output.py:TestIP6OutputMulticast::test_output6_multicast[ff0e] -> broken: Test case body timed out [300.005s]
sys/netinet6/test_ip6_output.py:TestIP6OutputNhopLL::test_output6_nhop_linklocal -> 2026-02-07T02:35:54.660130+00:00 - auditd 1540 - - auditd_wait_for_events: SIGTERM
2026-02-07T02:35:54.662254+00:00 - auditd 1540 - - Auditing disabled
2026-02-07T02:35:54.666434+00:00 - auditd 1540 - - renamed /var/audit/20260206203859.not_terminated to /var/audit/20260206203859.20260207023554
2026-02-07T02:35:54.668599+00:00 - auditd 1540 - - Finished
2026-02-07T02:36:14.680351+00:00 - init 1 - - some processes would not die; ps axl advised
Waiting (max 60 seconds) for system process `vnlru' to stop... done
Waiting (max 60 seconds) for system process `syncer' to stop...
Syncing disks, vnodes remaining... 0 0 done
All buffers synced.
+ rc=0
+ echo 'bhyve return code = 0'
bhyve return code = 0
+ sudo /usr/sbin/bhyvectl '--vm=testvm-main-amd64-27835' --destroy
+ sh -ex freebsd-ci/scripts/test/extract-meta.sh
+ METAOUTDIR=meta-out
+ rm -fr meta-out
+ mkdir meta-out
+ tar xvf meta.tar -C meta-out
x ./
x ./run-kyua.sh
x ./disable-notyet-tests.sh
x ./run.sh
x ./disable-zfs-tests.sh
x ./auto-shutdown
x ./disable-dtrace-tests.sh
+ rm -f test-report.txt test-report.xml
+ mv 'meta-out/test-report.*' .
mv: rename meta-out/test-report.* to ./test-report.*: No such file or directory
+ report=test-report.xml
+ [ -e freebsd-ci/jobs/FreeBSD-main-amd64-test/xfail-list -a -e test-report.xml ]
+ rm -f disk-cam
+ jot 5
+ rm -f disk1
+ rm -f disk2
+ rm -f disk3
+ rm -f disk4
+ rm -f disk5
+ rm -f disk-test.img
[PostBuildScript] - [INFO] Executing post build scripts.
[FreeBSD-main-amd64-test] $ /bin/sh -xe /tmp/jenkins2391395512100216540.sh
+ ./freebsd-ci/artifact/post-link.py
Post link: {'job_name': 'FreeBSD-main-amd64-test', 'commit': '9352d2f6dd55afcf0ac24d2806da7c6febf19589', 'branch': 'main', 'target': 'amd64', 'target_arch': 'amd64', 'link_type': 'latest_tested'}
"Link created: main/latest_tested/amd64/amd64 -> ../../9352d2f6dd55afcf0ac24d2806da7c6febf19589/amd64/amd64\n"
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Checking for post-build
Performing post-build step
Checking if email needs to be generated
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Sending mail from default account using System Admin e-mail address
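[Editor's note on the witness(4) reports above: a lock-order reversal means two code paths acquired the same pair of locks in opposite orders, so running both paths concurrently can deadlock; the report also flags taking a sleepable lock (the in6_multi_sx sx lock) while holding a non-sleepable one (the lle rwlock), which is forbidden outright. The real checker lives in the kernel (sys/kern/subr_witness.c); the bookkeeping idea can be sketched in a few lines of hypothetical userspace Python, not the actual implementation:]

```python
# Witness-style lock-order checker (illustrative sketch only; the real
# witness(4) code is far more involved and runs inside the kernel).
class OrderChecker:
    def __init__(self):
        self.after = {}  # lock name -> set of locks observed acquired after it
        self.held = []   # stack of locks the (single) thread currently holds

    def _reaches(self, src, dst, seen=None):
        # True if an acquisition order src -> ... -> dst was already recorded.
        seen = seen or set()
        if src == dst:
            return True
        seen.add(src)
        return any(n not in seen and self._reaches(n, dst, seen)
                   for n in self.after.get(src, ()))

    def acquire(self, name):
        reversals = []
        for held in self.held:
            # Acquiring `name` while holding `held` establishes the order
            # held -> name; if name -> held was recorded earlier, the two
            # code paths disagree and a deadlock is possible.
            if self._reaches(name, held):
                reversals.append((held, name))
            self.after.setdefault(held, set()).add(name)
        self.held.append(name)
        return reversals

    def release(self, name):
        self.held.remove(name)

w = OrderChecker()
w.acquire("lle"); w.acquire("in6_multi_list_mtx")  # path 1: lle -> in6_multi_list_mtx
w.release("in6_multi_list_mtx"); w.release("lle")
w.acquire("in6_multi_list_mtx")
print(w.acquire("lle"))  # path 2 reverses the order -> [('in6_multi_list_mtx', 'lle')]
```

The transitive reachability check matters: witness reports a reversal even when the two locks were never held together before, as long as a chain of recorded orderings (like the in6_multi_list_mtx -> ... -> lle chain in the log) implies one.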
