From owner-freebsd-jail@FreeBSD.ORG Sat Feb 13 01:51:30 2010
Message-ID: <4B76059F.9010700@h3q.com>
Date: Sat, 13 Feb 2010 02:51:27 +0100
From: Philipp Wuensche <cryx-freebsd@h3q.com>
To: Merijn Verstraaten
Cc: Christer Solskogen, freebsd-jail@freebsd.org
Subject: Re: Fwd: Jailcfg - A new tool for creating small(!) jails

Merijn Verstraaten wrote:
> On Sat, 13 Feb 2010 01:54:22 +0100, Philipp Wuensche wrote:
>>> The only data that is collected after that is user data, which is a
>>> good thing with no extra cost of system mount points and disk usage.
>>
>> That's only true until the first update of the FreeBSD userland
>> inside the jail. The moment you need to update the FreeBSD userland
>> inside the jail, it will use additional space and all the advantages
>> of this idea are gone.
>
> This is true, but not much of a problem in practice.

As you already explained, this heavily depends on what your practice
is! If you are in full control of each and every jail you run, this is
a workable practice. If you run a shared server with lots of people
managing the installed ports in their jails on their own, this may get
complicated, as you need to take into account different settings for
ports, configuration files in odd locations, userdata outside the
nullfs mount, and so on. This setup also requires you to restart every
jail for even minor userland updates, or you need to start syncing
those minor updates into every jail. That can be automated, of course
(a rough sketch follows below).

>> Using clone will also create a direct dependency between the
>> snapshots and the cloned filesystems. As long as the clone exists,
>> the snapshot has to be kept. This is only resolvable by using zfs
>> send/recv, which will, again, use additional space.
>
> I don't really see how the dependency is an issue. Could you perhaps
> explain how/why this matters?

In your setup it doesn't, as you nuke & pave and mount userdata via
nullfs; that's the key point here. But people tend to think a cloned
filesystem is independent from its snapshot and start to use it that
way.
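For the automation mentioned above, something along these lines could
work, assuming the jail roots live under /usr/jails (a hypothetical
path, adjust to taste):

# sync a minor userland update into every jail from the host,
# pointing freebsd-update(8) at each jail root with -b
for J in /usr/jails/*; do
    freebsd-update -b "$J" fetch install
done

You still have to restart the jails afterwards for the updated
binaries to be picked up.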
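The send/recv route mentioned above would look roughly like this,
reusing the dataset names from the example below (untested sketch):

% zfs send exports/zones/base@RELEASE-p2 | zfs recv exports/zones/jail2
% zfs destroy exports/zones/jail2@RELEASE-p2

Now exports/zones/jail2 is a full, independent copy with no tie to the
base snapshot, at the cost of the additional space.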
A common pitfall is combining snapshot, clone, and rollback:

% zfs create exports/zones/base
% zfs snapshot exports/zones/base@RELEASE-p1
# *updatemagic*
% zfs snapshot exports/zones/base@RELEASE-p2
% zfs clone exports/zones/base@RELEASE-p2 exports/zones/jail
% zfs rollback exports/zones/base@RELEASE-p1
cannot rollback to 'exports/zones/base@RELEASE-p1': more recent snapshots exist
use '-r' to force deletion of the following snapshots:
exports/zones/base@RELEASE-p2
% zfs rollback -r exports/zones/base@RELEASE-p1
cannot rollback to 'exports/zones/base@RELEASE-p1': clones of previous snapshots exist
use '-R' to force deletion of the following clones and dependents:
exports/zones/jail

If "exports/zones/jail" includes userdata (yes, in your setup it
doesn't), then you have a problem.

greetings,
philipp