From: Andriy Gapon <avg@icyb.net.ua>
Date: Wed, 29 Sep 2010 01:22:44 +0300
To: Ben Kelly
Cc: stable@freebsd.org, Willem Jan Withagen, fs@freebsd.org, Jeremy Chadwick
Subject: Re: Still getting kmem exhausted panic

on 29/09/2010 01:01 Ben Kelly said the following:
> Thanks. Yeah, there is a lot of aggressive tuning there. In particular,
> the slow-growth algorithm is somewhat dubious. What I found, though, was
> that fragmentation jumped whenever the ARC was reduced in size, so it
> was an attempt to make the size slowly approach peak load without
> overshooting.
>
> A better long-term solution would probably be to enhance UMA to support
> custom slab sizes on a zone-by-zone basis. That way all ZFS/ARC
> allocations could use 128KB slabs (at a memory-efficiency penalty, of
> course). I prototyped this with a dumbed-down block-pool allocator at
> one point and was able to avoid most, if not all, of the fragmentation.
> Adding that support to UMA seemed non-trivial, though.

BTW, have you seen my posts about UMA and ZFS on hackers@? I found it
advantageous to use UMA for ZFS I/O buffers, but only after reducing the
size of the per-CPU caches for the zones with large items. I further
modified the code in my local tree to completely disable the per-CPU
caches for items larger than 32KB.

--
Andriy Gapon
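
For illustration, a minimal sketch of the kind of change described in the
reply above, written against stock uma(9). This is not the actual patch
from Andriy's tree: the zio_buf_zone_create() helper and the threshold
macro are hypothetical names. UMA_ZONE_NOBUCKET is the uma.h zone flag
that disables a zone's per-CPU bucket caches, so allocations and frees go
to the zone itself rather than to per-CPU buckets.

    #include <sys/param.h>
    #include <vm/uma.h>

    /* Item size above which per-CPU caching is disabled (from the mail). */
    #define ZIO_NOBUCKET_THRESHOLD  (32 * 1024)

    /*
     * Create a ZFS I/O buffer zone.  Zones whose item size exceeds the
     * threshold are created with UMA_ZONE_NOBUCKET, so large buffers
     * never sit idle in per-CPU buckets and cannot fragment memory
     * the way cached large items otherwise would.
     */
    static uma_zone_t
    zio_buf_zone_create(const char *name, size_t size)
    {
            uint32_t flags = 0;

            if (size > ZIO_NOBUCKET_THRESHOLD)
                    flags |= UMA_ZONE_NOBUCKET;

            return (uma_zcreate(name, size, NULL, NULL, NULL, NULL,
                UMA_ALIGN_PTR, flags));
    }

With this, a call such as zio_buf_zone_create("zio_buf_64k", 64 * 1024)
(a made-up zone name) would bypass the bucket layer entirely, while
smaller zones keep UMA's default per-CPU caching.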