Date:      Fri, 21 May 1999 20:04:47 -0700 (PDT)
From:      Matthew Dillon <dillon@apollo.backplane.com>
To:        Cy Schubert - ITSD Open Systems Group <Cy.Schubert@uumail.gov.bc.ca>
Cc:        dg@root.com, Cliff Skolnick <cliff@steam.com>, Mike Tancsa <mike@sentex.net>, freebsd-stable@FreeBSD.ORG, luoqi@FreeBSD.ORG, Matthew Dillon <dillon@apollo.backplane.com>
Subject:   Re: vm_fault deadlock and PR 8416 ... NOT fixed! 
Message-ID:  <199905220304.UAA69597@apollo.backplane.com>
References:   <199905121459.HAA27314@cwsys.cwsent.com>

:Would plan B risk corruption of any data?  Could itself be the likely 
:cause of any potential panics?
:
:Assuming plan B has no major risks, this might be a temporary 
:workaround until we can wrap our minds around this one.  It's just a 
:rework of Luoqi's patch, just in case we want to try plan B again.
:
:--- kern_lock.c.orig	Tue May 11 08:34:52 1999
:+++ kern_lock.c	Wed May 12 05:38:52 1999
:@@ -215,7 +215,9 @@
: 		 * lock itself ).
: 		 */
: 		if (lkp->lk_lockholder != pid) {
:-			if (p->p_flag & P_DEADLKTREAT) {
:+			if ((p->p_flag & P_DEADLKTREAT) ||
:+			    ((lkp->lk_flags & LK_SHARE_NONZERO) != 0 &&
:+			    (flags & LK_CANRECURSE) != 0)) {
: 				error = acquire(
: 					    lkp,
: 					    extflags,
:
:If this workaround doesn't work, then setting error = 0 and allowing 
:the code to fall through to the subsequent sharelock may be our only 
:choice for now.
:
:The other point I wish to make for all on this list is that Matt's 
:patch fixes a read()/mmap() deadlock.  It doesn't fix a write()/mmap() 
:deadlock.
:
:
:Regards,                       Phone:  (250)387-8437
:Cy Schubert                      Fax:  (250)387-5766

    It's an interesting workaround, but a bit too complex.  That is, the
    locking is becoming too complex.  We are going to screw ourselves if
    we keep patching it.  I don't even like *my* patch to fix read/mmap
    deadlocks.

    I see a relatively simple solution.  Complex to implement, but simple
    in concept.  Actually two potential solutions.

    Solution #1:  A combination uio locking call.

	uiolock(vnode, vnodelocktype, uio, uiolocktype)

	The routine would lock *ALL* vnodes associated with the uio plus the
	passed vnode.  If the passed vnode is also present in the uio range
	then the most stringent lock type will be used for that vnode.

    Solution #2:  Integrate locks in the uio operation.

	The uio is broken up into segments containing unique vnodes.  So,
	for example, a uio referencing a memory range that spans more than
	one mmap() would be broken up.

	The source/destination vnode is relocked with the underlying uio
	vnode for each segment.  The locking order is sorted by pointer
	address.

	Each vnode is given a second 'overall' lock to guarantee read/write
	atomicity.  We will design this second lock to eventually allow us
	to implement (offset,size) ranges to allow concurrent reads/writes
	of non-overlapping areas.

    I like solution #2.  Solution #1 may be too messy.  The uio is the
    interface between the VM system and the VFS system, so it makes sense
    to integrate the locking within it.

					    -Matt






