Date: Thu, 7 Jun 2007 10:59:40 -0400
From: John Baldwin <jhb@freebsd.org>
To: Bruce Evans <brde@optusnet.com.au>
Cc: src-committers@freebsd.org, Kip Macy <kip.macy@gmail.com>, cvs-all@freebsd.org, Attilio Rao <attilio@freebsd.org>, cvs-src@freebsd.org, Kostik Belousov <kostikbel@gmail.com>, Jeff Roberson <jroberson@chesapeake.net>
Subject: Re: cvs commit: src/sys/kern kern_mutex.c
Message-ID: <200706071059.41466.jhb@freebsd.org>
In-Reply-To: <20070607180257.P7767@besplex.bde.org>
References: <200706051420.l55EKEih018925@repoman.freebsd.org> <20070607163724.M7517@besplex.bde.org> <20070607180257.P7767@besplex.bde.org>
On Thursday 07 June 2007 04:15:13 am Bruce Evans wrote:
> On Thu, 7 Jun 2007, Bruce Evans wrote:
>
> > (Cold client):
> >       834.39 real      1300.21 user       192.19 sys
> >       1323006 voluntary context switches
> >       1526348 involuntary context switches
> > ...
> > This is with 4BSD, no PREEMPTION, and pagezero disabled.  With the
> > ...
> > The next run will have pagezero resetting its priority when this priority
> > gets clobbered.
>
> That mainly gave more voluntary context switches (13.5+ million instead
> of the best observed value of 1.3+ million, or the value of 2.9+ million
> without priority resetting).  It reduced the pagezero time from 30 seconds
> to 24.  It didn't change the real time significantly.

Hmm, one problem with the non-preemption page zero is that it doesn't yield
the lock when it voluntarily yields.  I wonder if something like this patch
would help things for the non-preemption case:

Index: vm_zeroidle.c
===================================================================
RCS file: /usr/cvs/src/sys/vm/vm_zeroidle.c,v
retrieving revision 1.45
diff -u -r1.45 vm_zeroidle.c
--- vm_zeroidle.c	18 May 2007 07:10:50 -0000	1.45
+++ vm_zeroidle.c	7 Jun 2007 14:56:02 -0000
@@ -147,8 +147,10 @@
 #ifndef PREEMPTION
 		if (sched_runnable()) {
 			mtx_lock_spin(&sched_lock);
+			mtx_unlock(&vm_page_queue_free_mtx);
 			mi_switch(SW_VOL, NULL);
 			mtx_unlock_spin(&sched_lock);
+			mtx_lock(&vm_page_queue_free_mtx);
 		}
 #endif
 	} else {

We could simulate this behavior somewhat by using a critical section to
control when preemptions happen, so that we defer any preemption until we
have dropped the lock.
Something like this:

Index: vm_zeroidle.c
===================================================================
RCS file: /usr/cvs/src/sys/vm/vm_zeroidle.c,v
retrieving revision 1.45
diff -u -r1.45 vm_zeroidle.c
--- vm_zeroidle.c	18 May 2007 07:10:50 -0000	1.45
+++ vm_zeroidle.c	7 Jun 2007 14:58:39 -0000
@@ -110,8 +110,10 @@
 	if (m != NULL && (m->flags & PG_ZERO) == 0) {
 		vm_pageq_remove_nowakeup(m);
 		mtx_unlock(&vm_page_queue_free_mtx);
+		critical_exit();
 		pmap_zero_page_idle(m);
 		mtx_lock(&vm_page_queue_free_mtx);
+		critical_enter();
 		m->flags |= PG_ZERO;
 		vm_pageq_enqueue(PQ_FREE + m->pc, m);
 		++vm_page_zero_count;
@@ -141,20 +143,25 @@
 	idlezero_enable = idlezero_enable_default;
 	mtx_lock(&vm_page_queue_free_mtx);
+	critical_enter();
 	for (;;) {
 		if (vm_page_zero_check()) {
 			vm_page_zero_idle();
 #ifndef PREEMPTION
 			if (sched_runnable()) {
 				mtx_lock_spin(&sched_lock);
+				mtx_unlock(&vm_page_queue_free_mtx);
 				mi_switch(SW_VOL, NULL);
 				mtx_unlock_spin(&sched_lock);
+				mtx_lock(&vm_page_queue_free_mtx);
 			}
 #endif
 		} else {
+			critical_exit();
 			wakeup_needed = TRUE;
 			msleep(&zero_state, &vm_page_queue_free_mtx, 0,
 			    "pgzero", hz * 300);
+			critical_enter();
 		}
 	}
}

-- 
John Baldwin