Date: Sat, 25 Feb 2012 23:03:39 +0200
From: Konstantin Belousov <kostikbel@gmail.com>
To: Attilio Rao <attilio@freebsd.org>
Cc: arch@freebsd.org, Pawel Jakub Dawidek <pjd@freebsd.org>
Subject: Re: Prefaulting for i/o buffers
Message-ID: <20120225210339.GM55074@deviant.kiev.zoral.com.ua>
In-Reply-To: <CAJ-FndBBKHrpB1MNJTXx8gkFXR2d-O6k5-HJeOAyv2DznpN-QQ@mail.gmail.com>
References: <20120203193719.GB3283@deviant.kiev.zoral.com.ua>
 <CAJ-FndABi21GfcCRTZizCPc_Mnxm1EY271BiXcYt9SD_zXFpXw@mail.gmail.com>
 <20120225151334.GH1344@garage.freebsd.pl>
 <CAJ-FndBBKHrpB1MNJTXx8gkFXR2d-O6k5-HJeOAyv2DznpN-QQ@mail.gmail.com>
On Sat, Feb 25, 2012 at 06:45:00PM +0100, Attilio Rao wrote:
> On 25 February 2012 at 16:13, Pawel Jakub Dawidek <pjd@freebsd.org> wrote:
> > On Sat, Feb 25, 2012 at 01:01:32PM +0000, Attilio Rao wrote:
> >> On 3 February 2012 at 19:37, Konstantin Belousov <kostikbel@gmail.com> wrote:
> >> > The FreeBSD I/O infrastructure has a well-known deadlock caused
> >> > by vnode lock order reversal when buffers supplied to read(2) or
> >> > write(2) syscalls are backed by an mmapped file.
> >> >
> >> > I previously published patches converting the i/o path to use VMIO,
> >> > based on the Jeff Roberson proposal; see
> >> > http://wiki.freebsd.org/VM6. As a side effect, VM6 fixed the
> >> > deadlock. Since that work is very intrusive and did not get any
> >> > follow-up, it stalled.
> >> >
> >> > Below is a very lightweight patch whose only goal is to fix the
> >> > deadlock in the least intrusive way. This became possible after
> >> > FreeBSD got the vm_fault_quick_hold_pages(9) and
> >> > vm_fault_disable_pagefaults(9) KPIs.
> >> > http://people.freebsd.org/~kib/misc/vm1.3.patch
> >>
> >> Hi,
> >> I was reviewing:
> >> http://people.freebsd.org/~kib/misc/vm1.11.patch
> >>
> >> and I think it is great. It is simple enough and I don't have further
> >> comments on it.
Thank you. This spoiled an announcement I intended to send this weekend :)

> >> However, as a side note, I was wondering whether we could one day get
> >> to the point of integrating rangelocks into the vnode lockmgr directly.
> >> It would be a huge patch, likely rewriting the locking of several vnode
> >> members, but I think it would be worth it in terms of interface
> >> cleanliness and reduced overhead. It would also be interesting to
> >> consider merging the rangelock implementation with ZFS's at some point.
> > My personal opinion about rangelocks and many other VFS features we
> > currently have is that they are a good idea in theory, but in practice
> > they tend to overcomplicate VFS.
> >
> > I am of the opinion that we should move as much as we can into the
> > individual file systems. We try to implement everything in VFS itself
> > in the hope that this will simplify the file systems we have. It then
> > turns out that only one file system really uses the feature (most of
> > the time it is UFS), and it is a PITA for all the other file systems
> > as well as for maintaining VFS. VFS has become so complicated over the
> > years that maybe only a few people understand it, and every single
> > change to VFS carries a huge risk of breaking some unrelated part.
>
> I think this is questionable, for the following reasons:
> - If the problem is filesystem writers having trouble understanding the
> necessary locking, we should really provide cleaner and more complete
> documentation. One would think the same of our VM subsystem, but at
> least there, plenty of comments help in understanding how to deal with
> vm_object and vm_page locking over their lifetimes.
> - Our primitives may be more complicated than the 'all-in-the-filesystem'
> approach, but at least they offer a complete and centralized view of the
> resources allocated in the whole system, and they allow building better
> policies for managing them. One problem I see here is that those
> policies are not fully implemented, not tuned, or simply outdated,
> removing one of the biggest benefits of making vnodes so generic.
>
> About the things I mentioned myself:
> - As long as the same path now has both range-locking and vnode locking,
> I don't see keeping them separate forever as a good idea.
> Merging them seems to me an important evolution: not only would it
> shrink the number of primitives, it would also introduce less overhead
> and likely give revamped scalability for vnodes (but I think this needs
> deep investigation).
The proper direction to move there is to designate the vnode lock for
protection of the vnode structure, and have the range lock protect i/o
atomicity. This is partly done in the proposed patch, since the vnode
lock no longer protects the whole i/o operation, only the chunked i/o
transactions inside it. Jeff's idea of using the page cache as the
source of i/o data (implemented in the VM6 patchset) pushes the idea
much further: e.g., a write typically does not obtain the vnode write
lock at all (though sometimes it must, to extend the vnode). I will
probably revive VM6 after this change lands.

> - About the ZFS rangelocks absorbing the VFS ones, I think this is a
> minor point, but still, if you think it can be done efficiently and
> without losing performance, I don't see why not to do it. You already
> wrote rangelocks for ZFS, so you have earned a lot of experience in
> this area and can comment on the fallout, etc., but I don't see a good
> reason not to do it, unless it is just too difficult. This is not about
> generalizing a new mechanism; it is about using a general mechanism in
> a specific implementation, if possible.
ZFS rangelocks, as I understand from a cursory look, serve a completely
different purpose; or rather, they protect a different object than the
rangelocks added in the proposed patch.

A completely different question is merging the _implementation_ of
rangelocks: e.g., throwing out the naive code in kern_rangelock.c is
fine with me. But we do not take a CDDL-licensed implementation into
kern/.