From owner-freebsd-current@FreeBSD.ORG Sat Apr 18 07:39:13 2009
Date: Sat, 18 Apr 2009 09:38:57 +0200
From: Alexander Leidinger <alexander@leidinger.net>
To: ticso@cicely.de
Cc: current@freebsd.org, fs@freebsd.org
Message-ID: <20090418093857.0000199a@unknown>
In-Reply-To: <20090417141817.GR11551@cicely7.cicely.de>
References: <20090417145024.205173ighmwi4j0o@webmail.leidinger.net> <20090417141817.GR11551@cicely7.cicely.de>
Subject: Re: ZFS: unlimited arc cache growth?
On Fri, 17 Apr 2009 16:18:17 +0200 Bernd Walter wrote:

> On Fri, Apr 17, 2009 at 02:50:24PM +0200, Alexander Leidinger wrote:
> > Hi,
> >
> > To fs@, please CC me, as I'm not subscribed.
> >
> > For a while I monitored (by hand) the sysctls
> > kstat.zfs.misc.arcstats.size and kstat.zfs.misc.arcstats.hdr_size.
> > Both grow way higher (at some point I've seen more than 500M) than
> > what I have configured in vfs.zfs.arc_max (40M).
>
> My understanding of this is the following:
> vfs.zfs.arc_min/max are not used as hard min/max values.
> They are used as high/low watermarks.
> If the ARC is larger than max, a thread is triggered to shrink the
> ARC cache down to min, but in the meantime other threads can still
> grow the ARC, so there is a race between them.

500M (more than 10 times my max) after a night seems to be a big race...

> > After a while, FS operations (e.g. pkgdb -F with about 900
> > packages... my specific workload is the fixup of GNOME packages
> > after the removal of the obsolete libusb port) get very slow (in my
> > specific example I let pkgdb run several times over night and
> > it still has not finished).
>
> I've seen many workloads where prefetching can saturate disks without
> the prefetched data ever being used.
> You might want to try disabling prefetch.
> Of course prefetching also grows the ARC.

Prefetching is already disabled in this case.

> > The big problem with this is that at some point in time the
> > machine reboots (panic, page fault, page not present, during a
> > fork1).
> > I have the impression (beware, I have a watchdog
> > configured, and I don't know whether a triggered WD would cause the
> > same panic; the following is just a guess) that I run out of memory
> > of some kind (I have 1G RAM, i386, max kmem size 700M). I restarted
> > pkgdb several times after a reboot, and it continues to process the
> > libusb removal, but hey, this is annoying.
>
> With just 700M kmem you should set the ARC values extremely small and
> avoid anything which can quickly grow the ARC.
> Unfortunately, accessing many small files is a known ARC-filling
> workload. Activating vfs.zfs.cache_flush_disable can help speed up
> ARC shrinking, with the obvious risks of course...

I have this:

---snip---
vfs.zfs.prefetch_disable=1
vm.kmem_size="700M"
vm.kmem_size_max="700M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"
vfs.zfs.vdev.cache.bshift="13"  # device read-ahead: 8k (2^13 bytes)
vfs.zfs.vdev.max_pending="6"    # concurrent requests per device; more for NCQ
---snip---

Bye, Alexander.
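For reference, the "monitoring by hand" mentioned at the top of the thread can be scripted. A minimal sketch (the sysctl names are the ones quoted in the thread; the snapshot format and the 60-second interval are arbitrary choices, not anything from the original mails):

```shell
#!/bin/sh
# Print one timestamped snapshot of the ARC-related sysctls discussed
# in the thread, so growth past vfs.zfs.arc_max can be spotted later.
arc_snapshot() {
    printf '%s size=%s hdr_size=%s arc_max=%s\n' \
        "$(date '+%Y-%m-%d %H:%M:%S')" \
        "$(sysctl -n kstat.zfs.misc.arcstats.size)" \
        "$(sysctl -n kstat.zfs.misc.arcstats.hdr_size)" \
        "$(sysctl -n vfs.zfs.arc_max)"
}

# Poll every 60 seconds, e.g. redirected to a log file:
#   while :; do arc_snapshot; sleep 60; done >> /var/log/arcstats.log
```

Comparing the logged size against arc_max over a night's run would show how far past the watermark the ARC actually grows.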