Date: Tue, 31 May 2011 19:29:29 +0300
From: Andriy Gapon <avg@FreeBSD.org>
To: Steven Hartland <killing@multiplay.co.uk>
Cc: freebsd-fs@FreeBSD.org
Subject: Re: ZFS: arc_reclaim_thread running 100%, 8.1-RELEASE, LBOLT related
Message-ID: <4DE51769.9060907@FreeBSD.org>
In-Reply-To: <7F79B120F4ED415F8BB9EB7A4483AF8D@multiplay.co.uk>
References: <0EFD28CD-F2E9-4AE2-B927-1D327EC99DB9@bitgravity.com>
 <BANLkTikVq0-En7=4Dy_dTf=tM55Cqou_mw@mail.gmail.com>
 <4DE50811.5060606@FreeBSD.org>
 <7F79B120F4ED415F8BB9EB7A4483AF8D@multiplay.co.uk>
on 31/05/2011 18:57 Steven Hartland said the following:
> ----- Original Message -----
>> However, the arc_reclaim_thread does not have a ~24 day rollover - it
>> does not use clock_t. I think this rollover in the integer results in
>> LBOLT going negative, after about 106-107 days. We haven't noticed
>> this until actually 112-115 days of uptime. I think it is also related
>> to L1 ARC sizing and load. Our systems with arc set to a min-max of
>> 512M/2G ARC haven't developed the issue - at least not the CPU-hogging
>> thread - but the systems with 12G+ of ARC, and lots of rsync and du
>> activity alongside random reads from the zpool, develop the issue.
>
> Looks like we had this on a machine today which had only been up 66 days.

Sorry, but 'looks' is not very definitive.

> A reboot cleared it, but 66 days of uptime is nearly half the previously
> reported figure, making it a bit more serious.

It could have been some other bug or something else altogether.  Without
proper debugging/investigation it's impossible to tell.

-- 
Andriy Gapon