From owner-freebsd-current@freebsd.org Wed Jul 22 06:05:24 2020
Date: Tue, 21 Jul 2020 23:05:14 -0700
From: John-Mark Gurney
To: Peter Libassi
Cc: Marko Zec, freebsd-net@freebsd.org, freebsd-current@freebsd.org
Subject: Re: somewhat reproducible vimage panic
Message-ID: <20200722060514.GF4213@funkthat.com>
References: <20200721091654.GC4213@funkthat.com>
 <20200721113153.42d83119@x23>
 <20200721202323.GE4213@funkthat.com>
 <38F5A3A6-B578-4BA4-8F69-C248163CB6E0@libassi.se>
In-Reply-To: <38F5A3A6-B578-4BA4-8F69-C248163CB6E0@libassi.se>
Peter Libassi wrote this message on Wed, Jul 22, 2020 at 06:54 +0200:
> Is this related to
>
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=234985 and
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=238326

Definitely not 234985..  I'm using ue interfaces, so they don't get
destroyed while the jail is going away...

I don't think it's 238326 either.  This one is 100% reproducible, and
it's in the IP multicast code..  It looks like the in_multi code isn't
holding an interface or address lock while it waits for things to be
freed...

> On 21 Jul 2020, at 22:23, John-Mark Gurney wrote:
>
> > Marko Zec wrote this message on Tue, Jul 21, 2020 at 11:31 +0200:
> >> On Tue, 21 Jul 2020 02:16:55 -0700
> >> John-Mark Gurney wrote:
> >>
> >>> I'm running:
> >>> FreeBSD test 13.0-CURRENT FreeBSD 13.0-CURRENT #0 r362596: Thu Jun 25
> >>> 05:02:51 UTC 2020
> >>> root@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC
> >>> amd64
> >>>
> >>> and I'm working on improving the if_ure driver.  I've put together a
> >>> little script (attached) that I'm using to test the driver..  It puts
> >>> a couple of ue interfaces each into their own jail, configures them,
> >>> and tries to pass traffic.  This assumes that the two interfaces are
> >>> connected together.
> >>>
> >>> Pretty regularly when destroying the jails, I get the following
> >>> panic: CURVNET_SET at /usr/src/sys/netinet/in_mcast.c:626
> >>> inm_release() curvnet=0 vnet=0xfffff80154c82a80
> >>
> >> Perhaps the attached patch could help?  (disclaimer: not even
> >> compile-tested)
> >
> > The patch compiled, but it just moved the panic earlier than before.
> >
> > #4  0xffffffff80bc2123 in panic (fmt=<optimized out>)
> >     at ../../../kern/kern_shutdown.c:839
> > #5  0xffffffff80d61726 in inm_release_task (arg=<optimized out>,
> >     pending=<optimized out>) at ../../../netinet/in_mcast.c:633
> > #6  0xffffffff80c2166a in taskqueue_run_locked (queue=0xfffff800033cfd00)
> >     at ../../../kern/subr_taskqueue.c:476
> > #7  0xffffffff80c226e4 in taskqueue_thread_loop (arg=<optimized out>)
> >     at ../../../kern/subr_taskqueue.c:793
> >
> > Now it panics at the location of the new CURVNET_SET and not the
> > old one..
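
For reference, the assert that's tripping lives in CURVNET_SET() itself.
Paraphrased from sys/net/vnet.h as built with VNET_DEBUG (simplified
here, not the literal macro body):

	#define CURVNET_SET(arg)					\
		/* Panics with the exact message quoted above. */	\
		VNET_ASSERT((arg) != NULL &&				\
		    (arg)->vnet_magic_n == VNET_MAGIC_N,		\
		    ("CURVNET_SET at %s:%d %s() curvnet=%p vnet=%p",	\
		    __FILE__, __LINE__, __func__, curvnet, (arg)));	\
		curvnet = (arg);

So moving the CURVNET_SET() around can't help by itself: if the struct
vnet behind inm->inm_ifp->if_vnet has already been freed and trashed,
vnet_magic_n no longer matches VNET_MAGIC_N, and the assert fires
wherever the macro runs.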
> > Ok, decided to dump the contents of the vnet, and it looks like
> > it's a use after free:
> >
> > (kgdb) print/x *(struct vnet *)0xfffff8012a283140
> > $2 = {vnet_le = {le_next = 0xdeadc0dedeadc0de,
> >     le_prev = 0xdeadc0dedeadc0de}, vnet_magic_n = 0xdeadc0de,
> >     vnet_ifcnt = 0xdeadc0de, vnet_sockcnt = 0xdeadc0de,
> >     vnet_state = 0xdeadc0de, vnet_data_mem = 0xdeadc0dedeadc0de,
> >     vnet_data_base = 0xdeadc0dedeadc0de, vnet_shutdown = 0xde}
> >
> > The patch did seem to make it happen quicker, or maybe I was just
> > luckier this morning...
> >
> >>> (kgdb) #0  __curthread () at /usr/src/sys/amd64/include/pcpu_aux.h:55
> >>> #1  doadump (textdump=1) at /usr/src/sys/kern/kern_shutdown.c:394
> >>> #2  0xffffffff80bc6250 in kern_reboot (howto=260)
> >>>     at /usr/src/sys/kern/kern_shutdown.c:481
> >>> #3  0xffffffff80bc66aa in vpanic (fmt=<optimized out>,
> >>>     ap=<optimized out>) at /usr/src/sys/kern/kern_shutdown.c:913
> >>> #4  0xffffffff80bc6403 in panic (fmt=<optimized out>)
> >>>     at /usr/src/sys/kern/kern_shutdown.c:839
> >>> #5  0xffffffff80d6553b in inm_release (inm=0xfffff80029043700)
> >>>     at /usr/src/sys/netinet/in_mcast.c:630
> >>> #6  inm_release_task (arg=<optimized out>, pending=<optimized out>)
> >>>     at /usr/src/sys/netinet/in_mcast.c:312
> >>> #7  0xffffffff80c2521a in taskqueue_run_locked (queue=0xfffff80003116b00)
> >>>     at /usr/src/sys/kern/subr_taskqueue.c:476
> >>> #8  0xffffffff80c26294 in taskqueue_thread_loop (arg=<optimized out>)
> >>>     at /usr/src/sys/kern/subr_taskqueue.c:793
> >>> #9  0xffffffff80b830f0 in fork_exit (callout=0xffffffff80c26200,
> >>>     arg=0xffffffff81cf4f70, frame=0xfffffe0049e99b80)
> >>>     at /usr/src/sys/kern/kern_fork.c:1052
> >>> #10 <fork_trampoline> ()
> >>> (kgdb)
> >>>
> >>> I have the core files, so I can get additional information.
> >>>
> >>> Let me know if you need any additional information.
> >>
> >
> >> Index: sys/netinet/in_mcast.c
> >> ===================================================================
> >> --- sys/netinet/in_mcast.c	(revision 363386)
> >> +++ sys/netinet/in_mcast.c	(working copy)
> >> @@ -309,8 +309,10 @@
> >>  	IN_MULTI_LOCK();
> >>  	SLIST_FOREACH_SAFE(inm, &inm_free_tmp, inm_nrele, tinm) {
> >>  		SLIST_REMOVE_HEAD(&inm_free_tmp, inm_nrele);
> >> +		CURVNET_SET(inm->inm_ifp->if_vnet);
> >>  		MPASS(inm);
> >>  		inm_release(inm);
> >> +		CURVNET_RESTORE();
> >>  	}
> >>  	IN_MULTI_UNLOCK();
> >>  }

-- 
John-Mark Gurney				Voice: +1 415 225 5579

     "All that I will do, has been done, All that I have, has not."
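
PS: one direction a real fix could take, as an untested sketch only:
have vnet teardown drain the deferred-release task before the vnet is
freed, so a queued inm_release() can never dereference a dead vnet.
The names inm_free_task and vnet_inm_drain below are hypothetical
stand-ins (whatever task in_mcast.c actually enqueues, on whatever
queue it uses), and the SYSUNINIT subsystem/order is illustrative:

	/*
	 * Untested sketch: wait out any deferred in_multi releases
	 * while this vnet's interfaces are still intact.
	 */
	static void
	vnet_inm_drain(const void *unused __unused)
	{
		/* Blocks until any queued inm_free_task run completes. */
		taskqueue_drain(taskqueue_thread, &inm_free_task);
	}
	VNET_SYSUNINIT(vnet_inm_drain, SI_SUB_PROTO_DOMAIN, SI_ORDER_ANY,
	    vnet_inm_drain, NULL);

This only helps if nothing can queue another release for the dying vnet
after the drain runs, and the ordering against IN_MULTI_LOCK() would
need checking, so treat it as a starting point rather than a fix.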