From owner-freebsd-fs@freebsd.org Sun Jan 10 23:20:03 2016
Date: Sun, 10 Jan 2016 18:19:55 -0500 (EST)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Konstantin Belousov
Cc: FreeBSD Filesystems <freebsd-fs@freebsd.org>
Message-ID: <700310221.155153995.1452467995252.JavaMail.zimbra@uoguelph.ca>
In-Reply-To: <20160110154518.GU3625@kib.kiev.ua>
References: <1696608910.154845456.1452438117036.JavaMail.zimbra@uoguelph.ca> <20160110154518.GU3625@kib.kiev.ua>
Subject: Re: panic ffs_truncate3 (maybe fuse being evil)

Kostik wrote:
> On Sun, Jan 10, 2016 at 10:01:57AM -0500, Rick Macklem wrote:
> > Hi,
> >
> > When fooling around with GlusterFS, I can get this panic intermittently.
> > (I had a couple yesterday.) This happens on a Dec. 5, 2015 head kernel.
> >
> > panic: ffs_truncate3
> > - backtrace without the numbers (I just scribbled it off the screen)
> > ffs_truncate()
> > ufs_inactive()
> > VOP_INACTIVE_APV()
> > vinactive()
> > vputx()
> > kern_unlinkat()
> >
> > So, at a glance, it seems that either bo_dirty.bv_cnt or bo_clean.bv_cnt
> > is non-zero. (There is another case for the panic, but I thought it
> > was less likely?)
> >
> > So, I'm wondering if this might be another side effect of r291460,
> > since after that a new vnode isn't completely zero'd out?
> >
> > However, shouldn't bo_dirty.bv_cnt and bo_clean.bv_cnt be zero when
> > a vnode is recycled?
> > Does this make sense, or do some fields of v_bufobj need to be zero'd
> > out by getnewvnode()?
> Look at the _vdrop(). When a vnode is freed to the zone, it is asserted
> that the bufobj queues are empty. I very much doubt that it is possible
> to leak either buffers or counters by reuse.
>
Ok. I'll take a look, but, yes, it doesn't sound like the fields could be
left bogus when the vnode gets recycled.

> >
> > GlusterFS is using fuse, and I suspect that fuse isn't cleaning out
> > the buffers under some circumstance. (I already noticed that there
> > isn't any code in its fuse_vnop_reclaim(), and I vaguely recall that
> > there are conditions where VOP_INACTIVE() gets skipped, so that
> > VOP_RECLAIM() has to check for anything that would have been done by
> > VOP_INACTIVE() and do it, if it isn't already done.)
> But even if fuse leaves the buffers around, is it UFS which panics for
> you? I would rather worry about dangling pointers and use-after-free in
> fuse, which is a known issue with it anyway. I.e., it could be that fuse
> operates on a reclaimed and reused vnode as its own.
>
> >
> > Anyhow, if others have thoughts on this (or other hunches w.r.t. what
> > could cause this panic), please let me know.
>
> The ffs_truncate3 panic was deterministically triggered by a bug in
> ffs_balloc(). The routine allocated buffers for indirect blocks, but if
> the blocks could not be allocated, the buffers were left on the queue.
> See r174973; this was fixed a very long time ago.
>
But this was a one-month-old kernel (around r291900, although I don't know
the exact r#, but it was Dec. 5, 2015), so it definitely has this fix in it.

When I see it again, I will try to see what the v_bufobj fields look like.

Thanks, rick