From: Julian Elischer <julian@elischer.org>
To: Thomas Moestl
cc: FreeBSD current users, sparc64@freebsd.org
Date: Wed, 19 May 2004 12:30:01 -0700 (PDT)
Subject: Re: sparc64 question.. Anyone out there?
List-Id: Porting FreeBSD to the Sparc

Is there anyone out there who really understands this?

On Tue, 18 May 2004, Julian Elischer wrote:

> looking at this code again and the description as to why it is there..
> On Mon, 10 May 2004, Thomas Moestl wrote:
> > On Sun, 2004/05/09 at 15:44:40 -0700, Julian Elischer wrote:
> > > in vm_machdep.c the sparc64 code has
> > >
> > > void
> > > cpu_sched_exit(struct thread *td)
> > > {
> > > 	struct vmspace *vm;
> > > 	struct pcpu *pc;
> > > 	struct proc *p;
> > >
> > > 	mtx_assert(&sched_lock, MA_OWNED);
> > >
> > > 	p = td->td_proc;
> > > 	vm = p->p_vmspace;
> > > 	if (vm->vm_refcnt > 1)
> > > 		return;
> > > 	SLIST_FOREACH(pc, &cpuhead, pc_allcpu) {
> > > 		if (pc->pc_vmspace == vm) {
> > > 			vm->vm_pmap.pm_active &= ~pc->pc_cpumask;
> > > 			vm->vm_pmap.pm_context[pc->pc_cpuid] = -1;
> > > 			pc->pc_vmspace = NULL;
> > > 		}
> > > 	}
> > > }
> > >
> > > This is the only architecture that has this..
> > > What does it do? And what does it have to do with the scheduler?
>
> To answer question 2: nothing.. in my sources I renamed it to cpu_exit2()
>
> > To quote from the commit log:
> >
> >   date: 2002/06/24 15:48:01; author: jake; state: Exp; lines: +1 -0
> >   Add an MD callout like cpu_exit, but which is called after sched_lock is
> >   obtained, when all other scheduling activity is suspended. This is needed
> >   on sparc64 to deactivate the vmspace of the exiting process on all cpus.
> >   Otherwise, if another unrelated process gets the exact same vmspace
> >   structure allocated to it (same address), its address space will not be
> >   activated properly. This seems to fix some spontaneous signal 11 problems
> >   with smp on sparc64.
> >
> > To elaborate on that a bit:
> > The sparc64 cpu_switch() has an optimization to avoid needlessly
> > invalidating TLB entries: when we switch to a kernel thread, we need
> > not switch VM contexts at all, and can run with whatever vmspace was
> > active before. When we switch to a thread whose vmspace is already in
> > use, we need not load a new context register value (which is analogous
> > to flushing the TLB).
> > We identify vmspaces by their pointers for this purpose, so there can
> > be a race between freeing the struct vmspace by wait()ing (on another
> > processor) and switching to another thread (on the first processor).
> > Specifically, the first processor could be switching to a newly created
> > thread that has the same struct vmspace that was just freed, so we
> > would mistakenly assume that we need not bother loading the context
> > register, and continue using outdated TLB entries.
> >
> > To prevent this, cpu_sched_exit() zeros the respective PCPU variables
> > holding the active vmspace if it is going to be destroyed, so it will
> > never match any other during the next cpu_switch().
>
> I'm not convinced that this is valid.. Consider:
> when you cycle through the processors above and remove the pointers to
> the vmspace, then proceed to destroy the vmspace, nothing is done to
> make sure that the other processors are actually not USING the page
> tables etc. associated with the vmspace.
>
> If we reclaim the page tables, surely there is a danger that another
> cpu may still be using them?
>
> I think that even "temporary" users of vmspaces, such as kernel threads,
> should hold reference counts and should be able to free the vmspace
> if the count drops to 0 when they are done using it.
>
> > - Thomas
> >
> > --
> > Thomas Moestl	http://www.tu-bs.de/~y0015675/
> > 		http://people.FreeBSD.org/~tmm/
> > "I try to make everyone's day a little more surreal."
> > 	-- Calvin and Hobbes
>
> _______________________________________________
> freebsd-current@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"