Date: Thu, 14 Dec 2000 14:43:46 -0800 (PST)
From: Matt Dillon <dillon@earth.backplane.com>
To: Alfred Perlstein <bright@wintelcom.net>
Cc: Kirk McKusick <mckusick@FreeBSD.ORG>, cvs-committers@FreeBSD.ORG, cvs-all@FreeBSD.ORG
Subject: Re: cvs commit: src/sys/ufs/ffs ffs_inode.c ffs_softdep.c src/sys/ufs/ufs ufs_extern.h ufs_lookup.c
Message-ID: <200012142243.eBEMhkO98103@earth.backplane.com>
References: <200012130830.eBD8UbJ17674@freefall.freebsd.org> <20001213005420.Z16205@fw.wintelcom.net>
:..
:> Now when the worklist gets too large, other processes can safely
:> help out by picking off those work requests that can be handled
:> without locking a vnode, leaving only the small number of
:> requests requiring a vnode lock for the syncer process.  With
:> this change, it appears possible to keep even the nastiest
:> workloads under control.
:
:Possible, but this doesn't seem like it will always do the
:job.  Why not implement a maximum number of requests and block
:processes that want to add more until the threshold drops below
:some mark?
:
:--
:-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]

    I don't think we can afford to block, especially not in the middle
    of the softupdates code.  You don't know what locks someone might be
    holding or how critical the calling process is to the health of the
    system.  I just got through *removing* blockages from MALLOC's inside
    softupdates to fix low-memory deadlock problems.

    I *like* the idea of synchronously draining the queues, which does
    not involve deadlockable blocking (more to the point, it involves
    blocking only for a deterministic period of time: waiting on the
    I/O).  It is precisely this sort of idea that I use to good effect
    in dealing with low-memory situations.

						-Matt

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe cvs-all" in the body of the message