From owner-freebsd-fs@FreeBSD.ORG Sun Jun  9 12:46:18 2013
Date: Sun, 9 Jun 2013 05:46:03 -0700
From: Jeremy Chadwick <jdc@koitsu.org>
To: Dmitry Morozovsky
Subject: Re: /tmp: change default to mdmfs and/or tmpfs?
Message-ID: <20130609124603.GA35681@icarus.home.lan>
Cc: freebsd-fs@FreeBSD.org
User-Agent: Mutt/1.5.21 (2010-09-15)

On Sun, Jun 09, 2013 at 03:45:28PM +0400, Dmitry Morozovsky wrote:
> Dear colleagues,
>
> what do you think about stop using precious disk or even SSD resources
> for /tmp?
>
> For last several (well, maybe over 10?) years I constantly use md
> (swap-backed) for /tmp, usually 128M in size, which is enough for most
> of our server needs. Some require more, but none more than 512M.
> Regarding the options, we use
>
> tmpmfs_flags="-S -n -o async -b 4096 -f 512"

Hold up. Let's start with what you just gave. Everything I'm talking
about below is for stable/9, by the way:

1. "grep -r tmpfs /etc" returns nothing, so I don't know where this
   magic comes from.

2. tmpfs(5) documents none of these flags, and the flags you've given
   cannot be mdconfig(8) flags because:

   a) -S requires a sector size (you specified none),
   b) -n would have no bearing given the context,
   c) -o async applies only to vnode-backed devices (the default is
      malloc, and I see no -t vnode),
   d) There is no -b flag,
   e) The -f flag is for -t vnode only, and refers to the filename of
      the vnode backing store.

So consider me very, very confused by what you've given. Maybe the
flags were different on FreeBSD 6.x, 7.x, or 8.x? I haven't checked
http://www.freebsd.org/cgi/man.cgi yet.

> Given more and more fixes/improvements committed to tmpfs, switching
> /tmp to it would be even better idea.
>
> You thoughts? Thank you!

As I understand it, there are (or were -- I remember seeing them
repeatedly brought up on the mailing lists) problems with tmpfs.
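For reference, here is a minimal sketch of how a swap-backed memory
/tmp is typically created by hand with mdmfs(8), a wrapper around
mdconfig(8), newfs(8), and mount(8). The size and md unit number are
illustrative assumptions, and this is not an endorsement of the flags
quoted above:

```sh
# One-step: create a 128 MB swap-backed md device, newfs it, and
# mount it on /tmp (sizes illustrative).
mdmfs -s 128m md /tmp

# The equivalent long-hand steps:
mdconfig -a -t swap -s 128m -u 10   # attach swap-backed device md10
newfs -U /dev/md10                  # create a UFS filesystem on it
mount /dev/md10 /tmp                # mount it on /tmp
```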
Sometimes these issues would turn out to be with other filesystems
(such as unionfs), but other times not so much. If my memory serves me
correctly, there are major complexities with VM/memory management when
intermixing tmpfs + ZFS + UFS on a system***.

Skimming the lists and my memory, I come across these (and I recommend
anyone replying please read the full thread from each post onward):

http://lists.freebsd.org/pipermail/freebsd-current/2011-June/025459.html
http://lists.freebsd.org/pipermail/freebsd-current/2011-June/025461.html
http://lists.freebsd.org/pipermail/freebsd-fs/2013-January/016165.html

Be aware the -current posts I linked come from a thread that started by
asking whether tmpfs should "really still be considered experimental or
not".

Then there are these, which show issues getting MFC'd to stable/9 but
not 8.x, so one may want to be very careful about decisions where tmpfs
gets used by default going forward (but keep reading):

http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/139312
http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/159418
http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/155411
http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/171626

However, PR 155411 claims the issue happens on 9.0-RELEASE as well, and
PR 139312 even brings up ZFS -- I have no idea what "State: patched"
means (is it fixed? Is it committed? Why isn't the PR closed? etc.).

I also see this:

http://forums.freebsd.org/archive/index.php/t-30467.html

where someone stated that excessive ARC usage on ZFS had an indirect
effect on tmpfs. r233769 to stable/9 may have fixed this, but given the
history of all this "juggling" -- Feature X causing memory exhaustion
for Feature Y, which in turn affects Feature Z, all within kernel space
-- I really don't know how much I can trust it. One should probably
review the FreeBSD forums for other posts as well; gut feeling says
there's probably more there too.
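For concreteness, the tmpfs-on-/tmp setup being debated here normally
amounts to a single fstab line. The size cap below is an illustrative
assumption, mirroring the 512M figure mentioned earlier:

```
# /etc/fstab -- memory-backed /tmp via tmpfs(5), capped at 512 MB,
# world-writable with the sticky bit (mode 1777) as /tmp requires
tmpfs   /tmp   tmpfs   rw,mode=1777,size=512m   0   0
```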
Now some more generic items:

tmpfs does not retain data across reboots -- that's by design, of
course. I have concerns about stuff that may end up in /tmp that
*should* persist across reboots; an administrator may be surprised when
files he/she placed in /tmp no longer appear after a reboot. While this
may be considered a social problem of sorts, it definitely requires one
to reconsider the use of /tmp (instead of, say, /var/tmp) for certain
tasks.

In closing: if you want to make bsdinstall ask/prompt the administrator
"would you like to use tmpfs for /tmp?", then I'm all for it -- sounds
good to me. But doing it by default is something (at this time) I would
not be in favour of. I just don't get the impression of stability from
tmpfs given its track record. (Yes, I am paranoid in this regard.)

*** -- For example, I have personally experienced strange behaviour
when ZFS+UFS are used on the same system with massive amounts of I/O
being done between the two (in my experience, the ZFS ARC suddenly
limited itself in a strange manner, to some abysmally small value, much
lower than arc_max). In this case, I can only imagine tmpfs making
things "even worse" given the added memory pressure and so on.

-- 
| Jeremy Chadwick                                   jdc@koitsu.org |
| UNIX Systems Administrator                http://jdc.koitsu.org/ |
| Making life hard for others since 1977.             PGP 4BD6C0CB |