From owner-freebsd-fs@FreeBSD.ORG Fri Apr 17 14:36:02 2009
Date: Fri, 17 Apr 2009 16:18:17 +0200
From: Bernd Walter <ticso@cicely7.cicely.de>
To: Alexander Leidinger
Message-ID: <20090417141817.GR11551@cicely7.cicely.de>
References: <20090417145024.205173ighmwi4j0o@webmail.leidinger.net>
In-Reply-To: <20090417145024.205173ighmwi4j0o@webmail.leidinger.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
X-Operating-System: FreeBSD cicely7.cicely.de 7.0-STABLE i386
User-Agent: Mutt/1.5.11
Cc: current@freebsd.org, fs@freebsd.org
Subject: Re: ZFS: unlimited arc cache growth?
Reply-To: ticso@cicely.de
List-Id: Filesystems

On Fri, Apr 17, 2009 at 02:50:24PM +0200, Alexander Leidinger wrote:
> Hi,
>
> to fs@, please CC me, as I'm not subscribed.
>
> I monitored (by hand) a while the sysctls kstat.zfs.misc.arcstats.size
> and kstat.zfs.misc.arcstats.hdr_size. Both grow way higher (at some
> point I've seen more than 500M) than what I have configured in
> vfs.zfs.arc_max (40M).

My understanding of this is the following: vfs.zfs.arc_min/max are not
used as hard min/max values. They are used as low/high watermarks. If
the ARC is larger than max, a thread is triggered to reduce the ARC
cache down toward min, but in the meantime other threads can still grow
the ARC, so there is a race between them.

> After a while FS operations (e.g. pkgdb -F with about 900 packages...
> my specific workload is the fixup of gnome packages after the removal
> of the obsolete libusb port) get very slow (in my specific example I
> let the pkgdb run several times over night and it still is not
> finished).

I've seen many workloads where prefetching can saturate the disks
without the prefetched data ever being used. You might want to try
disabling prefetch. Of course prefetching also grows the ARC.

> The big problem with this is that at some point in time the machine
> reboots (panic, page fault, page not present, during a fork1). I have
> the impression (beware, I have a watchdog configured, as I don't know
> if a triggered WD would cause the same panic, the following is just a
> guess) that I run out of memory of some kind (I have 1G RAM, i386, max
> kmem size 700M).
> I restarted pkgdb several times after a reboot, and
> it continues to process the libusb removal, but hey, this is annoying.

With just 700M of kmem you should set the ARC values extremely small
and avoid anything which can quickly grow it. Unfortunately, accessing
many small files is a known ARC-filling workload. Activating
vfs.zfs.cache_flush_disable can help speed up shrinking the ARC, with
the obvious risks of course...

> Does someone see something similar to what I describe (mainly the
> growth of the arc cache way beyond what is configured)? Anyone with
> some ideas what to try?

In my opinion the watermark mechanism can work as it is, but there
should be an enforced maximum - currently there is no guaranteed limit
at all. Nevertheless it is up to the people who know the code to
decide.

-- 
B.Walter                http://www.bwct.de
Modbus/TCP Ethernet I/O modules, ARM-based FreeBSD machines, and more.
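[Editor's note: the tunables discussed in the thread above can be set
from /boot/loader.conf. The following is a sketch with example values
only - the numbers are illustrations, not recommendations, and should
be picked to fit the machine's RAM:]

```shell
# /boot/loader.conf -- example values only, tune for your machine
vfs.zfs.arc_max="40M"          # high watermark: reclaim thread starts shrinking here
vfs.zfs.arc_min="16M"          # low watermark: reclaim thread shrinks down toward this
vfs.zfs.prefetch_disable="1"   # turn off ZFS file-level prefetch
vm.kmem_size="700M"            # kernel memory limit (relevant on i386)

# The ARC size can then be watched at runtime with:
#   sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.hdr_size
```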