From: jhell
Date: Sun, 05 Sep 2010 20:21:49 -0400
To: Steven Hartland
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs very poor performance compared to ufs due to lack of cache?

On 09/05/2010 19:57, Steven Hartland wrote:
>
>> On 09/05/2010 16:13, Steven Hartland wrote:
>>>> 3656: uint64_t available_memory = ptoa((uintmax_t)cnt.v_free_count
>>>> 3657:     + cnt.v_cache_count);
>>
>>> earlier at 3614 I have what I think you're after, which is:
>>> uint64_t available_memory = ptoa((uintmax_t)cnt.v_free_count);
>>
>> Alright, change this to the above, recompile and re-run your tests.
>> Effectively, before this change (which apparently still needs to be
>> MFC'd or MFS'd) ZFS was not allowed to look at or use
>> cnt.v_cache_count. Pretty much, to sum it up: "available mem =
>> cache + free".
>>
>> This could possibly cause what you're seeing, but there might be
>> other changes still yet TBD. I'll look into what else has changed
>> from RELEASE -> STABLE.
>>
>> Also, do you check out your sources with svn(1) or csup(1)?
>
> Based on Jeremy's comments I'm updating the box to stable. It's
> building now, but it will be morning before I can reboot to activate
> the changes, as I need to deactivate the stream instance and wait
> for all active connections to finish.
>
> That said, the problem doesn't seem to be cache + free but rather
> cache + free + inactive, with inactive being the large chunk, so I'm
> not sure this change would make any difference?

If I remember correctly that was already calculated into the mix, but
I could be wrong. I remember a discussion about this before: "free"
was really inactive + free, while for the ARC the cache queue was
never being accounted for, so not enough paging was happening, which
would result in a situation like the one you have now. MAYBE!

> How does ufs deal with this, does it take inactive into account?
> Seems a bit silly for inactive pages to prevent reuse for extended
> periods when the memory could be better used as cache.

I agree; commented above.

> As an experiment I compiled a little app which malloced a large
> block of memory, 1.3G in this case, and then freed it. This does
> indeed pull the memory out of inactive and back into the free pool,
> where zfs is quite happy to re-expand the arc and once again cache
> large files. Seems a bit extreme to have to do this though.

Maybe we should add that code to zfs(8) and call it with
gimme-my-mem-back: 1 for all of it, 2 for half of it and 3 for panic ;)
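If anyone else wants to reproduce Steven's experiment, the below is my
guess at what his little app boils down to. It is only a sketch: the
memset() is my addition, since a plain malloc() never faults the pages
in, and the 1331 MB default is just his 1.3G rounded to whole
megabytes.

/*
 * hog.c -- fault in one big allocation so the VM is forced to
 * reclaim inactive pages, then hand it all back.  Size in MB as
 * the first argument, defaulting to roughly 1.3G.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(int argc, char *argv[])
{
	size_t mb = (argc > 1) ? strtoul(argv[1], NULL, 10) : 1331;
	size_t len = mb << 20;
	char *p = malloc(len);

	if (p == NULL) {
		perror("malloc");
		return (1);
	}
	memset(p, 1, len);	/* actually touch every page */
	free(p);		/* a chunk this big goes straight back to the VM */
	return (0);
}

Build it with "cc -o hog hog.c", run it, and watch top(1): Inact
should shrink and Free grow by roughly the requested size once it
exits.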
> Will see what happens with stable tomorrow though :)

Good luck Steve, I look forward to hearing the result.

If you are happy with the result you get from stable/8, I would
recommend patching to v15, which is much more stable than the v14
code. The specific patches you would want are (in order):

http://people.freebsd.org/~mm/patches/zfs/v15/stable-8-v15.patch
http://people.freebsd.org/~mm/patches/zfs/zfs_metaslab_v2.patch
http://people.freebsd.org/~mm/patches/zfs/zfs_abe_stat_rrwlock.patch

and then the needfree.patch I already posted, with the maxusers.patch
being optional.
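P.S. While you test, it can help to log the page queues alongside the
two notions of "available" being argued about above. top(1) shows the
same counters, but the little reader below prints them in megabytes
side by side. Again only a sketch, though vm.stats.vm.v_free_count,
v_cache_count and v_inactive_count are the stock sysctl names on 8.x.

/*
 * vmpages.c -- print the page-queue counters plus "available"
 * computed both ways (free only vs. free + cache).
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static uintmax_t
pages(const char *name)
{
	u_int v;
	size_t len = sizeof(v);

	if (sysctlbyname(name, &v, &len, NULL, 0) == -1) {
		perror(name);
		exit(1);
	}
	return ((uintmax_t)v);
}

int
main(void)
{
	uintmax_t ps = (uintmax_t)sysconf(_SC_PAGESIZE);
	uintmax_t f = pages("vm.stats.vm.v_free_count");
	uintmax_t c = pages("vm.stats.vm.v_cache_count");
	uintmax_t i = pages("vm.stats.vm.v_inactive_count");

	printf("free:               %6ju MB\n", f * ps >> 20);
	printf("cache:              %6ju MB\n", c * ps >> 20);
	printf("inactive:           %6ju MB\n", i * ps >> 20);
	printf("avail (free):       %6ju MB\n", f * ps >> 20);
	printf("avail (free+cache): %6ju MB\n", (f + c) * ps >> 20);
	return (0);
}

--
 jhell,v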