Date: Tue, 18 May 2004 12:57:34 -0700 (PDT)
From: Julian Elischer <julian@elischer.org>
To: Thomas Moestl <tmm@FreeBSD.org>
Cc: sparc64@freebsd.org
Subject: Re: sparc64 kernel code question..
Message-ID: <Pine.BSF.4.21.0405181248180.41838-100000@InterJet.elischer.org>
In-Reply-To: <20040510010301.GA6829@timesink.dyndns.org>
looking at this code again and the description as to why it is there..

On Mon, 10 May 2004, Thomas Moestl wrote:

> On Sun, 2004/05/09 at 15:44:40 -0700, Julian Elischer wrote:
> > in vm_machdep.c the sparc64 code has
> >
> > void
> > cpu_sched_exit(struct thread *td)
> > {
> > 	struct vmspace *vm;
> > 	struct pcpu *pc;
> > 	struct proc *p;
> >
> > 	mtx_assert(&sched_lock, MA_OWNED);
> >
> > 	p = td->td_proc;
> > 	vm = p->p_vmspace;
> > 	if (vm->vm_refcnt > 1)
> > 		return;
> > 	SLIST_FOREACH(pc, &cpuhead, pc_allcpu) {
> > 		if (pc->pc_vmspace == vm) {
> > 			vm->vm_pmap.pm_active &= ~pc->pc_cpumask;
> > 			vm->vm_pmap.pm_context[pc->pc_cpuid] = -1;
> > 			pc->pc_vmspace = NULL;
> > 		}
> > 	}
> > }
> >
> > This is the only architecture that has this..
> > What does it do? And what does it have to do with the scheduler?

to answer question 2, nothing.. in my sources I renamed it to cpu_exit2()

> To quote from the commit log:
>
> date: 2002/06/24 15:48:01;  author: jake;  state: Exp;  lines: +1 -0
> Add an MD callout like cpu_exit, but which is called after sched_lock is
> obtained, when all other scheduling activity is suspended. This is needed
> on sparc64 to deactivate the vmspace of the exiting process on all cpus.
> Otherwise if another unrelated process gets the exact same vmspace structure
> allocated to it (same address), its address space will not be activated
> properly. This seems to fix some spontaneous signal 11 problems with smp
> on sparc64.
>
> To elaborate on that a bit:
> The sparc64 cpu_switch() has an optimization to avoid needlessly
> invalidating TLB entries: when we switch to a kernel thread, we need
> not switch VM contexts at all, and can keep using whatever vmspace was
> active before. When we switch to a thread that has the vmspace that is
> already in use currently, we need not load a new context register
> value (which is analogous to flushing the TLB).
> We identify vmspaces by their pointers for this purpose, so there can
> be a race between freeing the struct vmspace by wait()ing (on another
> processor) and switching to another thread (on the first
> processor). Specifically, the first processor could be switching to a
> newly created thread that has the same struct vmspace that was just
> freed, so we would mistakenly assume that we need not bother loading
> the context register, and continue using outdated TLB entries.
>
> To prevent this, cpu_sched_exit() zeros the respective PCPU variables
> holding the active vmspace if it is going to be destroyed, so it will
> never match any other during the next cpu_switch().

I'm not convinced that this is valid.. consider..

When you cycle through the processors above and remove the pointers to
the vmspace, then proceed to destroy this vmspace, there is nothing done
to make sure that the other processors are actually not USING the page
tables etc. associated with the vmspace. If we reclaim the page tables,
surely there is a danger that another cpu may still be using them?

I think that even "temporary" users of vmspaces, such as kernel threads,
should have reference counts and be capable of freeing the vmspace
should it go to 0 when they complete using it.

> - Thomas
>
> --
> Thomas Moestl	<t.moestl@tu-bs.de>	http://www.tu-bs.de/~y0015675/
> 		<tmm@FreeBSD.org>	http://people.FreeBSD.org/~tmm/
> "I try to make everyone's day a little more surreal."
> 	-- Calvin and Hobbes