From: Craig Boston
To: Bakul Shah
Cc: freebsd-fs@FreeBSD.org, freebsd-current@FreeBSD.org, Pawel Jakub Dawidek
Date: Mon, 8 Oct 2007 11:27:42 -0500
Subject: Re: ZFS kmem_map too small.

On Sat, Oct 06, 2007 at 10:46:14AM -0700, Bakul Shah wrote:
> There are two differences: R1.11 arc_reclaim_needed() returns 1 when
> 80% of kmem is used, while R1.10 does so at 50% of kmem.

I'll bet it's this change that provokes the problem, since even when I
manually tune the ARC size to 1/2 of kmem_size or lower I still
sometimes get panics.

Probably what's happening is that kmem usage gets high from other
things, and when there's a sudden spike ZFS can't react fast enough to
shrink the ARC. At 50% it behaves more conservatively, so there's more
memory available for burst usage.

I noticed that some of the time, on my mostly-stable system, the panic
happens when the nvidia driver is trying to allocate a 128K chunk.
vmstat -m only shows nvidia at ~12MB total though, so I think it just
gets hit because it mallocs and frees large blocks more often than most
subsystems.

The machine with 512MB of memory doesn't run X at all, but it's by far
the least stable of the bunch. Unfortunately, it doesn't seem to want
to create crash dumps for some reason.

> Still, fiddling with limits to make the panic go away seems to
> somehow miss the point, as I always worry it will show up under other
> conditions. Maybe there is a way to ensure that kmem_map is never too
> small, or maybe zfs can reserve a few resources for its own use so
> that it can get out of a tight spot?

I don't think having ZFS reserve resources would help, since at least
for me the kmem_map panic doesn't always happen within ZFS code. It's
just that the increased kernel memory demands from ZFS cause it to run
out at times.

I still think the best course is to have ZFS's cache use VM objects the
way the buffer cache does, but I know that is a very nontrivial thing
to do.

Craig
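
P.S. For anyone following along, the 50%-vs-80% difference quoted above
amounts to a check roughly like the sketch below. This is only an
illustration of the behavior Bakul describes, not the actual arc.c
source; kmem_used() and kmem_size() are stand-ins for however the port
really measures kmem_map consumption.

    #include <sys/types.h>

    /* Assumed helpers: current and total kmem_map bytes. */
    extern uint64_t kmem_used(void);
    extern uint64_t kmem_size(void);

    static int
    arc_reclaim_needed(void)
    {
    #ifdef OLD_50_PERCENT_THRESHOLD
            /* rev 1.10 behavior: start reclaiming at 50% of kmem. */
            if (kmem_used() > kmem_size() / 2)
                    return (1);
    #else
            /* rev 1.11 behavior: don't react until 80% of kmem is used. */
            if (kmem_used() > (kmem_size() * 4) / 5)
                    return (1);
    #endif
            return (0);
    }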
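
P.P.S. Likewise, by "manually tuning the ARC size" above I mean the
usual loader.conf tunables, along these lines. The values here are
purely an example and need to be adjusted per system:

    # /boot/loader.conf -- example values only
    # Size of the kernel memory map; the ARC lives inside this.
    vm.kmem_size="512M"
    vm.kmem_size_max="512M"
    # Cap the ARC well below kmem_size (here 1/4) to leave headroom
    # for other kernel allocations.
    vfs.zfs.arc_max="128M"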