Date: Tue, 27 Jun 2006 01:27:25 -0700
From: "Kip Macy" <kip.macy@gmail.com>
To: "Robert Watson" <rwatson@freebsd.org>
Cc: Perforce Change Reviews <perforce@freebsd.org>, Kip Macy <kmacy@freebsd.org>, John Baldwin <jhb@freebsd.org>
Subject: Re: PERFORCE change 100089 for review
Message-ID: <b1fa29170606270127i50cddc78i4271158f64b2e72a@mail.gmail.com>
In-Reply-To: <b1fa29170606261955r252e15a5l4ffc13d061dbef02@mail.gmail.com>
References: <200606262054.k5QKsDq7022302@repoman.freebsd.org> <200606261759.41541.jhb@freebsd.org> <20060627001336.T79454@fledge.watson.org> <b1fa29170606261955r252e15a5l4ffc13d061dbef02@mail.gmail.com>
Actually, it shows up - assuming I haven't missed any cases, it's pretty
uncontended:

t1# sort -nrk 3 foo33 | grep -n rwlock
111:     1    26   553   38    0   14    1    0 /flatstor/shared/p4/sun4v/work_sleepq/src/sys/kern/uipc_usrreq.c:311 (unp_global_rwlock)
126:     2    42   349   38    1    9    0    0 /flatstor/shared/p4/sun4v/work_sleepq/src/sys/kern/uipc_usrreq.c:482 (unp_global_rwlock)
130:     7   184   282   38    4    7    2    0 /flatstor/shared/p4/sun4v/work_sleepq/src/sys/kern/uipc_usrreq.c:836 (unp_global_rwlock)
135:     2    33   240   19    1   12    0    0 /flatstor/shared/p4/sun4v/work_sleepq/src/sys/kern/uipc_usrreq.c:1196 (unp_global_rwlock)
137:     0     6   234   19    0   12    0    0 /flatstor/shared/p4/sun4v/work_sleepq/src/sys/kern/uipc_usrreq.c:446 (unp_global_rwlock)
155:    21   305   120   19   16    6    1    0 /flatstor/shared/p4/sun4v/work_sleepq/src/sys/kern/uipc_usrreq.c:496 (unp_global_rwlock)
161:     1    14    89   19    0    4    0    0 /flatstor/shared/p4/sun4v/work_sleepq/src/sys/kern/uipc_usrreq.c:1101 (unp_global_rwlock)
428:    61   808     1   19   42    0    2    0 /flatstor/shared/p4/sun4v/work_sleepq/src/sys/kern/uipc_usrreq.c:1125 (unp_global_rwlock)

On 6/26/06, Kip Macy <kip.macy@gmail.com> wrote:
> I've mapped your uipc_usrreq.c into my tree and have seen a measurable
> boost. I actually see no contention on it. If I go into overload (16
> threads) I see the following:
>
>   65 13580255 555960120  4332486   3 128 22050892 4323043 /flatstor/shared/p4/sun4v/work_sleepq/src/sys/kern/kern_synch.c:217 (lockbuilder mtxpool)
>   13 24053476 160697931 92708398   0   1 30726211       0 /flatstor/shared/p4/sun4v/work_sleepq/src/sys/kern/kern_switch.c:522 (runq lock)
>  371 63389470  27487168   936871  67  29  5918460  640938 /flatstor/shared/p4/sun4v/work_sleepq/src/sys/kern/kern_lock.c:163 (lockbuilder mtxpool)
>   39 36405448  10970117  4748316   7   2  4132590       0 /flatstor/shared/p4/sun4v/work_sleepq/src/sys/kern/kern_switch.c:221 (runq lock)
>  361 85861725  10866103  5699832  15   1  3813907       0 /flatstor/shared/p4/sun4v/work_sleepq/src/sys/kern/subr_sleepqueue.c:223 (sleepq chain)
>
> lockmgr is my biggest problem now.
>
> On 6/26/06, Robert Watson <rwatson@freebsd.org> wrote:
> > On Mon, 26 Jun 2006, John Baldwin wrote:
> >
> > > On Monday 26 June 2006 16:54, Kip Macy wrote:
> > >> http://perforce.freebsd.org/chv.cgi?CH=100089
> > >>
> > >> Change 100089 by kmacy@kmacy_storage:sun4v_work_sleepq on 2006/06/26 20:53:51
> > >>
> > >> add profiling for rwlocks
> > >> not convinced of correctness as there don't appear to be any contended
> > >> rwlocks on my workloads
> > >
> > > Few things use them currently. I have a patch to make the name cache use
> > > them if you want it.
> >
> > You may already have seen this, but I have a UNIX domain socket re-locking in
> > //depot/user/rwatson/proto/src/sys/kern/uipc_usrreq.c that uses rwlocks and
> > finer-grained mutexes, among other things. Ideally this can generate some
> > contention (although perhaps not too much).
> >
> > Robert N M Watson
> > Computer Laboratory
> > University of Cambridge
> >
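[Editor's note: for readers unfamiliar with the lock being profiled above, the
pattern in question is the standard FreeBSD rwlock(9) read/write lock API. The
sketch below is illustrative only; the lock name (example_global_rwlock) and
the helper functions are stand-ins, not the actual unp_global_rwlock code in
uipc_usrreq.c.]

    /*
     * Minimal sketch of the rwlock(9) usage pattern whose contention
     * the profiling output above measures.  Names are hypothetical.
     */
    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rwlock.h>

    static struct rwlock example_global_rwlock;

    static void
    example_init(void)
    {
            /* Initialize the lock once, before first use. */
            rw_init(&example_global_rwlock, "example_global_rwlock");
    }

    static void
    example_lookup(void)
    {
            /* Readers acquire the lock shared; many may hold it at once. */
            rw_rlock(&example_global_rwlock);
            /* ... walk the protected global state ... */
            rw_runlock(&example_global_rwlock);
    }

    static void
    example_modify(void)
    {
            /* Writers acquire the lock exclusive; this is the path that
               would show up as contention in the profile. */
            rw_wlock(&example_global_rwlock);
            /* ... modify the protected global state ... */
            rw_wunlock(&example_global_rwlock);
    }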