Date: Sat, 16 May 2009 12:40:44 -0400
From: Ben Kelly <ben@wanderview.com>
To: Adam McDougall <mcdouga9@egr.msu.edu>
Cc: freebsd-current@freebsd.org, Artem Belevich <fbsdlist@src.cx>
Subject: Re: [patch] zfs livelock and thread priorities
Message-ID: <5D988481-068A-4AB3-952E-255BEA1D9DA7@wanderview.com>
In-Reply-To: <20090516031332.GG82547@egr.msu.edu>
References: <08D7DC2A-68BE-47B6-8D5D-5DE6B48F87E5@wanderview.com>
 <AC3C4C3F-40C6-4AF9-BAF3-2C4D1E444839@wanderview.com>
 <ed91d4a80904142135n429dea52o672abf51116fa707@mail.gmail.com>
 <ed91d4a80904241816r28531a04r2dc70fa8960d430e@mail.gmail.com>
 <bc2d970904241947r50576efbgc93164a9e4dd297d@mail.gmail.com>
 <ed91d4a80904242059n3642a40aud55df6d1b6a1695@mail.gmail.com>
 <FC83DB1E-6C08-4BD4-8BC9-437D714FEE9E@wanderview.com>
 <ed91d4a80904271839l49420c8rbcfd52dd6e72eb83@mail.gmail.com>
 <ed91d4a80904281111q3b9a3c45vc9fcf129dde8c10d@mail.gmail.com>
 <F86D3461-3ABD-4A56-B9A6-36857364DF4B@wanderview.com>
 <20090516031332.GG82547@egr.msu.edu>
On May 15, 2009, at 11:13 PM, Adam McDougall wrote:

> On Tue, Apr 28, 2009 at 04:52:23PM -0400, Ben Kelly wrote:
> On Apr 28, 2009, at 2:11 PM, Artem Belevich wrote:
>> My system had eventually deadlocked overnight, though it took much
>> longer than before to reach that point.
>>
>> In the end I've got many, many processes sleeping in zio_wait with
>> no disk activity whatsoever.  I'm not sure if that's the same issue
>> or not.
>>
>> Here are stack traces for all processes -- http://pastebin.com/f364e1452
>> I've got the core saved, so if you want me to dig out some more
>> info, let me know if/how I could help.
>
> It looks like there is a possible deadlock between zfs_zget() and
> zfs_zinactive().  They both acquire a lock via ZFS_OBJ_HOLD_ENTER().
> The zfs_zinactive() path can get called indirectly from within
> zio_done().  zfs_zget() can in turn block waiting for zio_done()'s
> completion while holding the object lock.
>
> The following patch might help:
>
> http://www.wanderview.com/svn/public/misc/zfs/zfs_zinactive_deadlock.diff
>
> This simply bails out of the inactive processing if the object lock
> is already held.  I'm not sure if this is 100% correct, as it cannot
> verify that there are references to the vnode.  I also tried
> executing the zfs_zinactive() logic in a taskqueue to avoid the
> deadlock, but that caused other deadlocks to occur.
>
> Hope that helps.
>
> - Ben
>
> It's my understanding that the deadlock was fixed in -current.  How
> does that affect the usefulness of the thread priorities patch?
> Should I continue testing it, or is it effectively a NOOP now?

As far as I know the vnode release deadlock is unrelated to the
thread prioritization patch.

The particular problem that caused me to look at the priorities was a
livelock.  When the arc cache got low on memory, user and txg threads
would sometimes begin messaging each other in a seemingly infinite
pattern, each waiting for space to be freed.  Unfortunately, these
threads simultaneously starved the spa_zio threads, preventing them
from actually flushing data to the disks.  This effectively blocked
all disk-related activity and would wedge the box once the syncer
process got into the mix as well.  This condition doesn't happen on
opensolaris because their use of explicit priorities ensures that the
spa_zio threads take precedence over user and txg threads.

Beyond this particular scenario, it seems possible that there are
other priority-related problems lurking.  ZFS on opensolaris is
designed, either explicitly or implicitly, around the different
threads having certain relative priorities.  While it seems to mostly
work without these priorities, we are definitely opening ourselves up
to untested corner cases by ignoring the prioritization.

The one downside I have noticed to setting zfs thread priorities
explicitly is a reduction in interactivity during heavy disk load.
This is somewhat to be expected, since the spa_zio threads are
running at a higher priority than user threads.  This has been an
issue on opensolaris as well:

  http://bugs.opensolaris.org/view_bug.do?bug_id=6586537

That bug states that a fix is available, but I haven't had a chance
to go back and see what they ended up doing to make things more
responsive.

Currently the thread priority patch for freebsd is a proof of
concept.  If people think it's a valid approach I can try to clean it
up so that it could be committed.  The two main issues with it right
now are:

1) It changes the kproc(9) API by adding a kproc_create_priority()
   function that allows you to set the priority of the newly created
   thread (see the sketch after this list).  I'm not sure how people
   feel about this.

2) It makes the opensolaris thread_create() function take freebsd
   priority values and sets the constants maxclsyspri and minclsyspri
   to somewhat arbitrary values.  This means that if someone ports
   other opensolaris code over and passes priority values to
   thread_create() without using these constants, they will get
   unexpected behavior.  This could be addressed by creating a
   mapping function from opensolaris priorities to freebsd priorities
   (also sketched below).
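To make 1) concrete, the new call is shaped roughly like
kproc_create(9) with one extra argument.  This is only a sketch; the
exact signature in the patch may differ:

  /*
   * Like kproc_create(9), but the caller supplies the scheduling
   * priority the newly created kernel process/thread starts with.
   */
  int kproc_create_priority(void (*func)(void *), void *arg,
          struct proc **newpp, int flags, int pages, int pri,
          const char *fmt, ...);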
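For 2), the mapping function would be something along these lines.
Again, an untested sketch: the opensolaris bounds (60 and 99) and the
choice of the PRI_MIN_KERN..PRI_MAX_KERN target range are
illustrative assumptions, not what the patch currently does:

  #include <sys/priority.h>

  /*
   * Map an opensolaris priority (larger number == more important,
   * nominally minclsyspri (60) through maxclsyspri (99)) onto the
   * freebsd kernel priority range, where smaller numbers are more
   * important.
   */
  static int
  solaris_pri_to_freebsd(int spri)
  {
          const int smin = 60;            /* opensolaris minclsyspri */
          const int smax = 99;            /* opensolaris maxclsyspri */

          /* Clamp anything outside the expected opensolaris range. */
          if (spri < smin)
                  spri = smin;
          if (spri > smax)
                  spri = smax;

          /* Linear interpolation, inverting the sense of the scale. */
          return (PRI_MAX_KERN -
              (spri - smin) * (PRI_MAX_KERN - PRI_MIN_KERN) /
              (smax - smin));
  }

With something like that in place, code passing maxclsyspri would get
the strongest kernel priority and minclsyspri the weakest, instead of
relying on whatever arbitrary constants we pick.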
> Also, I've been doing some fairly intense testing of zfs in recent
> -current and I am tracking down a situation where performance gets
> worse, but I think I found a workaround.  I am gathering more data
> regarding the cause, workaround, symptoms, and originating commit
> and will post about it soon.

I'd be interested to hear more about this.

Thanks!

- Ben