From owner-freebsd-fs@FreeBSD.ORG Mon Oct 3 18:34:00 2011
Message-ID: <4E8A0013.4070008@fsn.hu>
Date: Mon, 03 Oct 2011 20:33:55 +0200
From: Attila Nagy <bra@fsn.hu>
To: Artem Belevich
Cc: freebsd-fs@freebsd.org, Adrian Chadd, delphij@freebsd.org
Subject: Re: is TMPFS still highly experimental?
References: <20111002020231.GA70864@icarus.home.lan> <4E899C8E.7040305@fsn.hu>
List-Id: Filesystems

On 10/03/2011 04:58 PM, Artem Belevich wrote:
>> For me, the bug is still here:
>> $ uname -a
>> FreeBSD b 8.2-STABLE FreeBSD 8.2-STABLE #5: Wed Sep 14 15:01:25 CEST 2011
>>     root@buildervm:/data/usr/obj/data/usr/src/sys/BOOTCLNT  amd64
>> $ df -h /tmp
>> Filesystem    Size    Used    Avail  Capacity  Mounted on
>> tmpfs           0B      0B       0B      100%  /tmp
>>
>> I have no swap configured. The machine has 64 GB RAM.
>> vm.kmem_size=60G; vfs.zfs.arc_max=55G; vfs.zfs.arc_min=20G
> I'm curious -- does your ARC size ever reach the configured limit of
> 55G? My hunch is that it's probably hovering around some noticeably
> lower number.
Yes, within a few minutes. Current counters:
kstat.zfs.misc.arcstats.c_min: 21474836480
kstat.zfs.misc.arcstats.c_max: 59055800320
kstat.zfs.misc.arcstats.size: 45691792856

> On my ZFS setups a lot of memory seems to be lost due to
> fragmentation. On a system with 24G of RAM and arc_max=16G, I
> typically see more than 20G of memory wired.
> With kmem_size=60G, ARC is likely to use up most of the available kmem
> space, and that's probably what affects tmpfs. Besides, with kmem_size
> that close to arc_max you may be risking a "kmem too small" panic,
> though, considering that your kmem_size is rather large, the chances
> of that are smaller than on a system with less memory and a smaller
> kmem_size.
Sounds plausible.
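For what it's worth, the gap Artem is pointing at can be read straight off the numbers above. A small sh sketch using the figures from this thread (on a live box one would pull them from sysctl(8) instead of hard-coding them):

```shell
#!/bin/sh
# Figures quoted in this thread; on a real system they would come from
# e.g. `sysctl -n vm.kmem_size` and `sysctl -n vfs.zfs.arc_max`.
kmem_size=$((60 * 1024 * 1024 * 1024))   # vm.kmem_size=60G
arc_max=$((55 * 1024 * 1024 * 1024))     # vfs.zfs.arc_max=55G
arc_size=45691792856                     # kstat.zfs.misc.arcstats.size

# kmem headroom left for everything else (tmpfs included) once ARC
# grows to its configured cap:
headroom_gb=$(( (kmem_size - arc_max) / 1024 / 1024 / 1024 ))
echo "kmem headroom at arc_max: ${headroom_gb} GB"
```

With only 5 GB between arc_max and kmem_size, it is at least plausible that tmpfs sees no usable space once ARC is warm.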
BTW, it may be that the ARC limits are no longer needed; they date from a time when, on a 64 GB machine, ARC hovered around 2-5 GB without them (and arc_min was even higher back then). BTW, the user-space programs on this machine typically fit into around 1-2 GB of RAM. Well, most of the time. :)

> I'd start with doubling kmem_size and, possibly, reducing arc_max to
> the point where it stops putting pressure on tmpfs.
I know there are several differences, but it would be very good to have behaviour similar to UFS. I guess it's quite evident that tmpfs should be able to eat into the file system cache, and I know it may not be so trivial to solve this with ZFS. :)
Will try it, thanks.
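If it helps anyone following along, Artem's suggestion might translate into /boot/loader.conf roughly like this. The concrete values are illustrative only, not a recommendation from this thread, and should be sized against the machine's actual workload:

```
# /boot/loader.conf -- illustrative values, not from the thread
vm.kmem_size="120G"      # doubled from 60G, per the suggestion above
vfs.zfs.arc_max="48G"    # reduced from 55G to leave headroom for tmpfs
vfs.zfs.arc_min="20G"    # unchanged
```

Changes to these tunables take effect at the next boot.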