Date: Mon, 16 Mar 2015 11:46:43 +0200
From: Konstantin Belousov <kostikbel@gmail.com>
To: Mateusz Guzik <mjguzik@gmail.com>
Cc: freebsd-fs@freebsd.org
Subject: Re: atomic v_usecount and v_holdcnt
Message-ID: <20150316094643.GZ2379@kib.kiev.ua>
In-Reply-To: <20150314225226.GA15302@dft-labs.eu>
References: <20141122002812.GA32289@dft-labs.eu> <20141122092527.GT17068@kib.kiev.ua> <20141122211147.GA23623@dft-labs.eu> <20141124095251.GH17068@kib.kiev.ua> <20150314225226.GA15302@dft-labs.eu>
On Sat, Mar 14, 2015 at 11:52:26PM +0100, Mateusz Guzik wrote:
> On Mon, Nov 24, 2014 at 11:52:52AM +0200, Konstantin Belousov wrote:
> > On Sat, Nov 22, 2014 at 10:11:47PM +0100, Mateusz Guzik wrote:
> > > On Sat, Nov 22, 2014 at 11:25:27AM +0200, Konstantin Belousov wrote:
> > > > On Sat, Nov 22, 2014 at 01:28:12AM +0100, Mateusz Guzik wrote:
> > > > > The idea is that we don't need an interlock as long as we don't
> > > > > transition either counter 1->0 or 0->1.
> > > > I already said that something along the lines of the patch should work.
> > > > In fact, you need the vnode lock when the hold count changes between 0 and 1,
> > > > and probably the same for the use count.
> > > >
> > > I don't see why this would be required (not that I'm a VFS expert).
> > > Vnode recycling seems to be protected with the interlock.
> > >
> > > In fact I would argue that if this is really needed, the current code is
> > > buggy.
> > Yes, it is already (somewhat) buggy.
> >
> > Most of the need for the lock is for the case of counts going from 1 to 0.
> > The reason is the handling of the active vnode list, which is used
> > for limiting the amount of vnode list walking in the syncer. When the hold
> > count is decremented to 0, the vnode is removed from the active list.
> > When the use count is decremented to 0, the vnode is supposedly inactivated,
> > and vinactive() cleans the cached pages belonging to the vnode. In other
> > words, VI_OWEINACT for a dirty vnode is sort of a bug.
> >
>
> Modified the patch to no longer have the usecount + interlock dropped +
> VI_OWEINACT set window.
>
> The extended 0->1 hold count + vnode not locked window remains. I can fix
> that if it is really necessary by having _vhold return with the interlock
> held if it did such a transition.

In v_upgrade_usecount(), you call v_incr_devcount() without the interlock
held. What prevents the devfs vnode from being recycled, in particular,
from invalidation of the v_rdev pointer?

I think that the refcount_acquire_if_greater() KPI is excessive. You always
call acquire with val == 0, and release with val == 1.

WRT _refcount_release_lock, why can't the lock_object->lc_lock/lc_unlock
KPI be used? That would allow making refcount_release_lock() a function
instead of a gcc extension macro. Not to mention that the macro is unused.
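
For context, the counting scheme being discussed boils down to the shape
below. This is only an illustrative sketch, not the patch under review:
the names vn_ref_acquire_if_active() and vn_ref_release() are made up for
the example, and plain C11 atomics stand in for the kernel's atomic(9)
and refcount(9) primitives. The point it shows is that references can be
taken and dropped without the interlock as long as the counter stays away
from the 0/1 boundary; the boundary transitions are pushed back to a
locked slow path.

    #include <stdatomic.h>
    #include <stdbool.h>

    /*
     * Try to take a reference without any lock, but only if the count
     * is already non-zero.  A 0->1 transition is refused here and must
     * be done by the caller with the vnode interlock held.
     */
    static bool
    vn_ref_acquire_if_active(_Atomic unsigned *cnt)
    {
            unsigned old = atomic_load_explicit(cnt, memory_order_relaxed);

            while (old > 0) {
                    if (atomic_compare_exchange_weak_explicit(cnt, &old,
                        old + 1, memory_order_acquire, memory_order_relaxed))
                            return (true);  /* lock-free reference taken */
            }
            return (false);                 /* caller must lock and retry */
    }

    /*
     * Drop a reference.  Returns true only for the 1->0 transition, in
     * which case the caller is expected to take the interlock (or vnode
     * lock) and perform inactivation / active-list removal.
     */
    static bool
    vn_ref_release(_Atomic unsigned *cnt)
    {
            return (atomic_fetch_sub_explicit(cnt, 1,
                memory_order_release) == 1);
    }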
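
The lc_lock/lc_unlock suggestion could look roughly like the sketch below.
Again, this is a hypothetical illustration rather than the code from the
patch: it assumes a mutex-like lock whose lc_lock "how" argument may be 0,
and the retry-on-race policy is an assumption about how the helper would
be used. It only shows how going through LOCK_CLASS() lets the helper be
a plain function taking a struct lock_object * instead of a gcc
statement-expression macro.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <machine/atomic.h>

    /*
     * Drop a reference, taking the supplied lock around the final 1->0
     * transition via the lock class methods.  Returns true for the last
     * release, with the lock still held so the caller can finish the
     * teardown and then unlock.
     */
    static bool
    refcount_release_lock(volatile u_int *count, struct lock_object *lock)
    {
            u_int old;

            for (;;) {
                    old = *count;
                    if (old > 1) {
                            /* Not the last reference; no lock needed. */
                            if (atomic_cmpset_int(count, old, old - 1))
                                    return (false);
                            continue;
                    }
                    /* Probable last reference; do the 1->0 drop locked. */
                    LOCK_CLASS(lock)->lc_lock(lock, 0);
                    if (atomic_cmpset_int(count, old, old - 1))
                            return (true);
                    /* The count changed while acquiring the lock; retry. */
                    LOCK_CLASS(lock)->lc_unlock(lock);
            }
    }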