From: Ben Kelly
Date: Tue, 28 Sep 2010 12:46:39 -0400
To: Andriy Gapon
Cc: stable@freebsd.org, Willem Jan Withagen, fs@freebsd.org, Jeremy Chadwick
Subject: Re: Still getting kmem exhausted panic
In-Reply-To: <4CA21809.7090504@icyb.net.ua>
Message-Id: <71D54408-4B97-4F7A-BD83-692D8D23461A@wanderview.com>

On Sep 28, 2010, at 12:30 PM, Andriy Gapon wrote:

> on 28/09/2010 18:50 Ben Kelly said the following:
>>
>> On Sep 28, 2010, at 9:36 AM, Andriy Gapon wrote:
>>> Well, no time for me to dig through all that history. arc_max should be a
>>> hard limit and it is now. If it ever wasn't then it was a bug.
>>
>> I believe the size of the arc could exceed the limit if your working set was
>> larger than arc_max. The arc can't (couldn't then, anyway) evict data that is
>> still referenced.
>
> I think that you are correct and I was wrong.
> ARC would still allocate a new buffer even if it's at or above arc_max and
> cannot re-use any existing buffer.
> But I think that this is more likely to happen with a "tiny" ARC size. I have
> a hard time imagining a workload in which gigabytes of data would be
> simultaneously and continuously used (see below for a definition of "used").
>
>> A contributing factor at the time was that the page daemon did not take into
>> account back pressure from the arc when deciding which pages to move from
>> active to inactive, etc. So data was more likely to be referenced and
>> therefore forced to remain in the arc.
>
> I don't think that this is what happened, and I don't think that pagedaemon
> has anything to do with the discussed issue.
> I think that ARC buffers exist independently of pagedaemon and the page cache.
> I think that they are held only during the time when I/O is happening to or
> from them.

Hmm.
My server is currently idle with no I/O happening:

  kstat.zfs.misc.arcstats.c: 25165824
  kstat.zfs.misc.arcstats.c_max: 46137344
  kstat.zfs.misc.arcstats.size: 91863156

If what you say is true, this shouldn't happen, should it?  This system is an
i386 machine with kmem max at 800M and arc set to 40M.  It is running head
from April 6, 2010, so it is a bit old, though.

At one point I had patches running on my system that triggered the pagedaemon
based on arc load, and they did allow me to keep my arc below the max.  Or at
least I thought they did.

In any case, I've never really been able to wrap my head around the VFS layer
and how it interacts with zfs, so I'm more than willing to believe I'm
confused.  Any insights are greatly appreciated.

Thanks!

- Ben
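P.S. In case it helps anyone else watching this thread: below is a rough,
untested sketch (my own, not anything from the ZFS code) that just reads the
same three arcstats sysctls quoted above via sysctlbyname(3) and prints how
far size sits above c_max.  It assumes the kstats are exported as 64-bit
integers, which matches the numbers above.

/*
 * Sketch: compare the ARC's current size against its configured
 * maximum using the kstat.zfs.misc.arcstats.* sysctls.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint64_t
read_kstat(const char *oid)
{
        uint64_t v;
        size_t len = sizeof(v);

        /* Assumes the OID is a 64-bit integer kstat. */
        if (sysctlbyname(oid, &v, &len, NULL, 0) != 0) {
                perror(oid);
                exit(1);
        }
        return (v);
}

int
main(void)
{
        uint64_t size = read_kstat("kstat.zfs.misc.arcstats.size");
        uint64_t c = read_kstat("kstat.zfs.misc.arcstats.c");
        uint64_t c_max = read_kstat("kstat.zfs.misc.arcstats.c_max");

        printf("size=%ju c=%ju c_max=%ju over_c_max=%jd\n",
            (uintmax_t)size, (uintmax_t)c, (uintmax_t)c_max,
            (intmax_t)size - (intmax_t)c_max);
        return (0);
}

Run it in a loop while the box is idle; a persistently positive over_c_max
would show the same drift above arc_max as the numbers above.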