Date: Sat, 20 Feb 1999 15:09:35 -0800 (PST)
From: Matthew Jacob <mjacob@feral.com>
To: Doug Rabson <dfr@nlsystems.com>
Cc: freebsd-hackers@freebsd.org
Subject: Re: Panic in FFS/4.0 as of yesterday - update
Message-ID: <Pine.LNX.4.04.9902201504250.31494-100000@feral-gw>
In-Reply-To: <Pine.BSF.4.05.9902202300460.82049-100000@herring.nlsystems.com>
> > I'm not entirely sure that the root inode lock is the whole problem. I
> > think another problem may be just growing very large delayed write
> > queues- there doesn't seem to be any way any more to keep a single
> > process from blowing the whole buffer cache- but I'd be the first to
> > admit that my knowledge of this area of unix internals is 7-10 years
> > old.
>
> Certainly the root inode lock is the symptom. Even if it was fixed (by
> rewriting lookup), a single process can still generate unreasonable
> amounts of i/o. With a merged vmio system, this can cause huge latencies
> as we have seen.

When I was at Kubota I did a bit of work in this area- essentially
breaking the single bdelwri queue into 'generations' so that you could
know when one pass had completed prior to allowing another set of passes
(often dirtying the same buffers) to run. It was originally motivated by
trying to find a way to ensure that there was a known point at which I/O
for delayed writes was complete.

It strikes me that the length of such I/O (in a multidimensional measure
of actual bytes, number of requests (before AND after clustering-
clustering isn't always a win, particularly with 4th generation I/O
hoses), and the number of different threads pushing I/O) makes it a
candidate for I/O scheduling at the VFS layer. Is anyone thinking along
these lines? Sounds like a really good candidate for kernel threads...
A rough sketch of the generation idea is below.
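To make the generation idea concrete, here is a toy userland sketch-
pthreads instead of kernel locking, all names and the high-water number
made up, and nothing like the actual Kubota or FreeBSD code. A flush
pass drains only the buffers dirtied before it started; buffers
re-dirtied during the pass land in the next generation, which gives you
the known completion point, and the byte/request counters give the VFS
layer a crude measure to throttle against:

#include <pthread.h>
#include <stdint.h>
#include <stddef.h>

#define DELWRI_HIWAT (16u << 20)  /* arbitrary high-water mark: 16MB dirty */

struct buf {
	struct buf	*b_next;
	uint64_t	 b_gen;		/* generation this buf was dirtied in */
	size_t		 b_bytes;	/* payload size */
};

struct delwri_queue {
	pthread_mutex_t	 q_lock;
	pthread_cond_t	 q_gen_done;	/* signalled when a pass drains */
	struct buf	*q_head;
	uint64_t	 q_dirty_gen;	/* generation new dirty bufs join */
	uint64_t	 q_flush_gen;	/* oldest generation not yet done */
	size_t		 q_bytes;	/* dirty bytes (one axis of the measure) */
	unsigned	 q_requests;	/* pre-clustering requests (another axis) */
};

#define DELWRI_QUEUE_INIT \
	{ PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, NULL, 0, 0, 0, 0 }

/* Queue a delayed write under the current dirty generation. */
void
delwri_enqueue(struct delwri_queue *q, struct buf *bp)
{
	pthread_mutex_lock(&q->q_lock);
	/*
	 * Crude VFS-level throttle: a writer that finds the queue over
	 * the high-water mark sleeps until a flush pass drains it, so a
	 * single process can't blow the whole buffer cache.
	 */
	while (q->q_bytes > DELWRI_HIWAT)
		pthread_cond_wait(&q->q_gen_done, &q->q_lock);
	bp->b_gen = q->q_dirty_gen;
	bp->b_next = q->q_head;
	q->q_head = bp;
	q->q_bytes += bp->b_bytes;
	q->q_requests++;
	pthread_mutex_unlock(&q->q_lock);
}

/*
 * One flush pass: bump the dirty generation first, so buffers re-dirtied
 * while we work join the *next* pass, then write out (here, just unlink)
 * everything tagged with the old generation.
 */
void
delwri_flush_pass(struct delwri_queue *q)
{
	struct buf **bpp, *bp;
	uint64_t gen;

	pthread_mutex_lock(&q->q_lock);
	gen = q->q_dirty_gen++;
	bpp = &q->q_head;
	while ((bp = *bpp) != NULL) {
		if (bp->b_gen == gen) {
			*bpp = bp->b_next;
			q->q_bytes -= bp->b_bytes;
			q->q_requests--;
			/* ... issue and await the real write for bp here ... */
		} else
			bpp = &bp->b_next;
	}
	q->q_flush_gen = gen + 1;	/* everything dirtied in 'gen' is done */
	pthread_cond_broadcast(&q->q_gen_done);
	pthread_mutex_unlock(&q->q_lock);
}

/*
 * Block until every buffer dirtied in or before 'gen' has been written-
 * the known point at which delayed-write I/O is complete.
 */
void
delwri_wait_gen(struct delwri_queue *q, uint64_t gen)
{
	pthread_mutex_lock(&q->q_lock);
	while (q->q_flush_gen <= gen)
		pthread_cond_wait(&q->q_gen_done, &q->q_lock);
	pthread_mutex_unlock(&q->q_lock);
}

A syncer would snapshot q_dirty_gen, kick a pass, and delwri_wait_gen()
on the snapshot; a per-thread request count would be the obvious third
axis to throttle on.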
-matt


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message