Date: Fri, 08 Jun 2007 17:06:51 +0200
From: Attilio Rao <attilio@FreeBSD.org>
To: Bruce Evans <brde@optusnet.com.au>
Cc: src-committers@freebsd.org, John Baldwin <jhb@freebsd.org>, Kip Macy <kip.macy@gmail.com>, cvs-all@freebsd.org, cvs-src@freebsd.org, Kostik Belousov <kostikbel@gmail.com>, Jeff Roberson <jroberson@chesapeake.net>
Subject: Re: cvs commit: src/sys/kern kern_mutex.c
Message-ID: <4669708B.4090509@FreeBSD.org>
In-Reply-To: <20070609002752.M13082@besplex.bde.org>
References: <200706051420.l55EKEih018925@repoman.freebsd.org> <20070607163724.M7517@besplex.bde.org> <20070607180257.P7767@besplex.bde.org> <200706071059.41466.jhb@freebsd.org> <20070609002752.M13082@besplex.bde.org>
Bruce Evans wrote:
> The sched_locks are now of course thread_locks.  I put the vm unlock
> before the thread lock since the above order seems to risk a LOR.  That
> may have been a mistake -- we would prefer not to be switched after
> deciding to do it ourself.

No, there can't be a LOR here: thread_lock is a spin mutex while vm_page_queue_mtx is a sleep mutex, and by our well-known locking rules you cannot take them in the opposite order.  And if you don't follow John's pattern I think you get a race too, since it leaves a break in the protected path.

>> Index: vm_zeroidle.c
>> ===================================================================
>> RCS file: /usr/cvs/src/sys/vm/vm_zeroidle.c,v
>> retrieving revision 1.45
>> diff -u -r1.45 vm_zeroidle.c
>> --- vm_zeroidle.c	18 May 2007 07:10:50 -0000	1.45
>> +++ vm_zeroidle.c	7 Jun 2007 14:58:39 -0000
>> @@ -110,8 +110,10 @@
>>  	if (m != NULL && (m->flags & PG_ZERO) == 0) {
>>  		vm_pageq_remove_nowakeup(m);
>>  		mtx_unlock(&vm_page_queue_free_mtx);
>> +		critical_exit();
>>  		pmap_zero_page_idle(m);
>>  		mtx_lock(&vm_page_queue_free_mtx);
>> +		critical_enter();
>>  		m->flags |= PG_ZERO;
>>  		vm_pageq_enqueue(PQ_FREE + m->pc, m);
>>  		++vm_page_zero_count;
>
> Next I will try this.  I put the critical_exit() before the vm unlock.
> mtx_unlock() should be allowed to switch if it wants.  However, we
> would prefer to keep context switches disabled in the above -- just drop
> the lock so that other CPUs can proceed.

This is good since mtx_unlock will force a preemption point here.

Attilio
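For reference, a minimal C sketch of what that block of vm_zeroidle.c looks like once the quoted diff is applied.  The standalone helper zero_one_page_sketch() and the include list are made up for illustration; only the order of the calls is taken from the patch, and in the tree this code sits inline in the idle-zero loop rather than in a helper.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <vm/vm.h>
#include <vm/pmap.h>
#include <vm/vm_page.h>

/*
 * Hypothetical helper, not code from the tree: called with
 * vm_page_queue_free_mtx held and inside a critical section, it drops
 * both around pmap_zero_page_idle() and then reacquires them.
 */
static void
zero_one_page_sketch(vm_page_t m)
{

	vm_pageq_remove_nowakeup(m);
	mtx_unlock(&vm_page_queue_free_mtx);	/* let other CPUs at the free queue */
	critical_exit();			/* any preemption deferred so far can happen here */
	pmap_zero_page_idle(m);			/* the slow part runs with no locks held */
	mtx_lock(&vm_page_queue_free_mtx);	/* a sleep mutex, so it may not be taken with a spin lock held */
	critical_enter();
	m->flags |= PG_ZERO;
	vm_pageq_enqueue(PQ_FREE + m->pc, m);
	++vm_page_zero_count;
}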