Date: Sun, 10 Jan 2016 10:01:57 -0500 (EST)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject: panic ffs_truncate3 (maybe fuse being evil)
Message-ID: <1696608910.154845456.1452438117036.JavaMail.zimbra@uoguelph.ca>
Hi,

When fooling around with GlusterFS, I can get this panic intermittently.
(I had a couple yesterday.) This happens on a Dec. 5, 2015 head kernel.

panic: ffs_truncate3

Backtrace without the numbers (I just scribbled it off the screen):
  ffs_truncate()
  ufs_inactive()
  VOP_INACTIVE_APV()
  vinactive()
  vputx()
  kern_unlinkat()

So, at a glance, it seems that either bo_dirty.bv_cnt or bo_clean.bv_cnt is
non-zero. (There is another case for the panic, but I thought it was less
likely?)

So, I'm wondering if this might be another side effect of r291460, since
after that change a new vnode isn't completely zero'd out. However, shouldn't
bo_dirty.bv_cnt and bo_clean.bv_cnt be zero when a vnode is recycled? Does
this make sense, or do some fields of v_bufobj need to be zero'd out by
getnewvnode()?

GlusterFS is using fuse, and I suspect that fuse isn't cleaning out the
buffers under some circumstance. (I already noticed that there isn't any
code in its fuse_vnop_reclaim(), and I vaguely recall that there are
conditions where VOP_INACTIVE() gets skipped, so that VOP_RECLAIM() has to
check for anything that would have been done by VOP_INACTIVE() and do it,
if it isn't already done.)

Anyhow, if others have thoughts on this (or other hunches w.r.t. what could
cause this panic), please let me know.

Thanks, rick
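[Editor's note: for readers unfamiliar with the check being described, here
is a hedged sketch of the shape of the consistency check behind the
"ffs_truncate3" panic and of the kind of buffer flush a reclaim/inactive
path would be expected to have done beforehand. It is not verbatim FreeBSD
source; the helper names check_bufs_gone() and flush_before_recycle() are
hypothetical.]

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/buf.h>
	#include <sys/bufobj.h>
	#include <sys/vnode.h>

	/*
	 * Sketch of the sanity check: after truncating the file, both
	 * buffer lists hanging off the vnode's bufobj are expected to be
	 * empty, and ffs_truncate() panics if they are not.
	 */
	static void
	check_bufs_gone(struct vnode *vp)
	{
		struct bufobj *bo = &vp->v_bufobj;

		if (bo->bo_dirty.bv_cnt > 0 || bo->bo_clean.bv_cnt > 0)
			panic("ffs_truncate3");
	}

	/*
	 * Sketch of the cleanup a VOP_RECLAIM()/VOP_INACTIVE() path is
	 * expected to perform so no buffers survive onto a recycled vnode:
	 * write out any dirty buffers and release everything on the lists.
	 * (A sketch only, not a claim about what the fuse code does.)
	 */
	static int
	flush_before_recycle(struct vnode *vp)
	{
		return (vinvalbuf(vp, V_SAVE, 0, 0));
	}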