Date: Sat, 18 May 2019 17:45:47 +0300
From: Konstantin Belousov <kostikbel@gmail.com>
To: Mark Johnston <markj@freebsd.org>
Cc: Rebecca Cran <rebecca@bluestop.org>, freebsd-current@freebsd.org
Subject: Re: panic booting with if_tap_load="YES" in loader.conf
Message-ID: <20190518144547.GE2748@kib.kiev.ua>
In-Reply-To: <20190518143815.GD2748@kib.kiev.ua>
References: <105ad083-ea95-21a7-35be-de01e32578c4@bluestop.org> <20190518053328.GA7370@raichu> <20190518085546.GY2748@kib.kiev.ua> <20190518142436.GB7370@raichu> <20190518143815.GD2748@kib.kiev.ua>
On Sat, May 18, 2019 at 05:38:15PM +0300, Konstantin Belousov wrote:
> On Sat, May 18, 2019 at 10:24:36AM -0400, Mark Johnston wrote:
> > On Sat, May 18, 2019 at 11:55:46AM +0300, Konstantin Belousov wrote:
> > > On Sat, May 18, 2019 at 01:33:28AM -0400, Mark Johnston wrote:
> > > > On Fri, May 17, 2019 at 10:18:57PM -0600, Rebecca Cran wrote:
> > > > > I just updated from r346856 to r347950 and ran into a new panic, caused
> > > > > by having if_tap_load="YES" in /boot/loader.conf - because it's already
> > > > > built-in to the kernel:
> > > >
> > > > I think this patch should fix the panic, but I only compile-tested it.
> > > > I considered having the logic live in preload_delete_name() instead, but
> > > > the boot-time ucode code must still defer the deletion somewhat.
> > >
> > > Try this instead. I will revert r347931 after this landed, or could keep
> > > it alone.
> >
> > I have no strong feeling either way.
> >
> > > diff --git a/sys/amd64/amd64/machdep.c b/sys/amd64/amd64/machdep.c
> > > index 1cf09dc5cb7..03fe8a5d096 100644
> > > --- a/sys/amd64/amd64/machdep.c
> > > +++ b/sys/amd64/amd64/machdep.c
> > > @@ -1616,6 +1616,13 @@ hammer_time(u_int64_t modulep, u_int64_t physfree)
> > >  	bzero((void *)thread0.td_kstack, kstack0_sz);
> > >  	physfree += kstack0_sz;
> > >  
> > > +	/*
> > > +	 * Initialize enough of thread0 for delayed invalidation to
> > > +	 * work very early.  Rely on thread0.td_base_pri
> > > +	 * zero-initialization, it is reset to PVM at proc0_init().
> > > +	 */
> > > +	pmap_thread_init_invl_gen(&thread0);
> > > +
> >
> > I think pmap_thread_init_invl_gen() also needs to initialize
> > invl_gen->saved_pri to 0.
> It is zero-initialized, same as td_base_pri. The call to
> pmap_thread_init_invl_gen() is needed because the _INVALID bit must be
> set, and it is cleared by default.

I now think that you mean something else: that invl_gen.saved_pri must be
set on each entry to invl_start_u().
Otherwise invl_finish_u() might act on the priority from the previous DI
block.

diff --git a/sys/amd64/amd64/machdep.c b/sys/amd64/amd64/machdep.c
index 1cf09dc5cb7..03fe8a5d096 100644
--- a/sys/amd64/amd64/machdep.c
+++ b/sys/amd64/amd64/machdep.c
@@ -1616,6 +1616,13 @@ hammer_time(u_int64_t modulep, u_int64_t physfree)
 	bzero((void *)thread0.td_kstack, kstack0_sz);
 	physfree += kstack0_sz;
 
+	/*
+	 * Initialize enough of thread0 for delayed invalidation to
+	 * work very early.  Rely on thread0.td_base_pri
+	 * zero-initialization, it is reset to PVM at proc0_init().
+	 */
+	pmap_thread_init_invl_gen(&thread0);
+
 	/*
 	 * make gdt memory segments
 	 */
diff --git a/sys/amd64/amd64/pmap.c b/sys/amd64/amd64/pmap.c
index 7997a9f65dc..89ba9da19d8 100644
--- a/sys/amd64/amd64/pmap.c
+++ b/sys/amd64/amd64/pmap.c
@@ -700,16 +700,17 @@ pmap_delayed_invl_start_u(void)
 	invl_gen = &td->td_md.md_invl_gen;
 	PMAP_ASSERT_NOT_IN_DI();
 	lock_delay_arg_init(&lda, &di_delay);
-	thread_lock(td);
+	invl_gen->saved_pri = 0;
 	pri = td->td_base_pri;
-	if (pri < PVM) {
-		invl_gen->saved_pri = 0;
-	} else {
-		invl_gen->saved_pri = pri;
-		sched_prio(td, PVM);
+	if (pri > PVM) {
+		thread_lock(td);
+		pri = td->td_base_pri;
+		if (pri > PVM) {
+			invl_gen->saved_pri = pri;
+			sched_prio(td, PVM);
+		}
+		thread_unlock(td);
 	}
-	thread_unlock(td);
-
 again:
 	PV_STAT(i = 0);
 	for (p = &pmap_invl_gen_head;; p = prev.next) {
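For readers following along: the pmap.c hunk above does two things. It
unconditionally resets saved_pri on every DI entry (so a finish never sees
a stale value from an earlier DI block), and it turns the priority demotion
into a double-checked pattern: a cheap unlocked read of td_base_pri first,
and thread_lock() taken only when demotion looks necessary, with the
priority re-read under the lock. Below is a minimal self-contained
userspace sketch of that pattern; the struct thr type, its field names,
and the PVM value of 100 are all hypothetical stand-ins, not the kernel's
actual definitions, and a pthread mutex stands in for thread_lock().

```c
#include <pthread.h>

/* Hypothetical analogue of the kernel pattern; higher number = lower
 * priority, matching the pri > PVM comparisons in the patch. */
#define PVM 100

struct thr {
	pthread_mutex_t lock;	/* stands in for thread_lock(td) */
	int base_pri;		/* stands in for td_base_pri */
	int saved_pri;		/* stands in for md_invl_gen.saved_pri */
};

static void
invl_start(struct thr *td)
{
	int pri;

	td->saved_pri = 0;	/* reset on every entry: never reuse a
				 * stale value from a previous DI block */
	pri = td->base_pri;	/* unlocked, possibly racy fast-path read */
	if (pri > PVM) {
		pthread_mutex_lock(&td->lock);
		pri = td->base_pri;	/* re-check under the lock */
		if (pri > PVM) {
			td->saved_pri = pri;
			td->base_pri = PVM;	/* sched_prio(td, PVM) */
		}
		pthread_mutex_unlock(&td->lock);
	}
}

static void
invl_finish(struct thr *td)
{
	if (td->saved_pri != 0) {
		pthread_mutex_lock(&td->lock);
		td->base_pri = td->saved_pri;	/* restore old priority */
		pthread_mutex_unlock(&td->lock);
		td->saved_pri = 0;
	}
}
```

The point of the reset-then-double-check shape is that threads already at
or above PVM priority never touch the lock at all, while the re-read under
the lock guards against the priority changing between the unlocked check
and the demotion.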