From owner-freebsd-fs@FreeBSD.ORG  Tue Dec 20 10:03:47 2011
Return-Path: <owner-freebsd-fs@FreeBSD.ORG>
Delivered-To: freebsd-fs@FreeBSD.ORG
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id 96FA91065679
	for <freebsd-fs@FreeBSD.ORG>; Tue, 20 Dec 2011 10:03:47 +0000 (UTC)
	(envelope-from peterjeremy@acm.org)
Received: from fallbackmx06.syd.optusnet.com.au
	(fallbackmx06.syd.optusnet.com.au [211.29.132.8])
	by mx1.freebsd.org (Postfix) with ESMTP id 224D68FC13
	for <freebsd-fs@FreeBSD.ORG>; Tue, 20 Dec 2011 10:03:45 +0000 (UTC)
Received: from mail16.syd.optusnet.com.au (mail16.syd.optusnet.com.au
	[211.29.132.197])
	by fallbackmx06.syd.optusnet.com.au (8.13.1/8.13.1) with ESMTP id
	pBK7vOew020514
	for <freebsd-fs@FreeBSD.ORG>; Tue, 20 Dec 2011 18:57:27 +1100
Received: from server.vk2pj.dyndns.org
	(c220-239-116-103.belrs4.nsw.optusnet.com.au [220.239.116.103])
	by mail16.syd.optusnet.com.au (8.13.1/8.13.1) with ESMTP id
	pBK7vHnQ031874
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 20 Dec 2011 18:57:18 +1100
X-Bogosity: Ham, spamicity=0.000000
Received: from server.vk2pj.dyndns.org (localhost.vk2pj.dyndns.org [127.0.0.1])
	by server.vk2pj.dyndns.org (8.14.5/8.14.4) with ESMTP id pBK7vFaR035981;
	Tue, 20 Dec 2011 18:57:15 +1100 (EST)
	(envelope-from peter@server.vk2pj.dyndns.org)
Received: (from peter@localhost)
	by server.vk2pj.dyndns.org (8.14.5/8.14.4/Submit) id pBK7vFoU035980;
	Tue, 20 Dec 2011 18:57:15 +1100 (EST) (envelope-from peter)
Date: Tue, 20 Dec 2011 18:57:14 +1100
From: Peter Jeremy <peterjeremy@acm.org>
To: Hugo Silva <hugo@barafranca.com>
Message-ID: <20111220075714.GA35787@server.vk2pj.dyndns.org>
References: <4EEF321E.5090806@barafranca.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="x+6KMIRAuhnl3hBn"
Content-Disposition: inline
In-Reply-To: <4EEF321E.5090806@barafranca.com>
X-PGP-Key: http://members.optusnet.com.au/peterjeremy/pubkey.asc
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: freebsd-fs@FreeBSD.ORG
Subject: Re: ZFS: root pool considerations, multiple pools on the same disk
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems <freebsd-fs.freebsd.org>
List-Unsubscribe: <http://lists.freebsd.org/mailman/listinfo/freebsd-fs>,
	<mailto:freebsd-fs-request@freebsd.org?subject=unsubscribe>
List-Archive: <http://lists.freebsd.org/pipermail/freebsd-fs>
List-Post: <mailto:freebsd-fs@freebsd.org>
List-Help: <mailto:freebsd-fs-request@freebsd.org?subject=help>
List-Subscribe: <http://lists.freebsd.org/mailman/listinfo/freebsd-fs>,
	<mailto:freebsd-fs-request@freebsd.org?subject=subscribe>
X-List-Received-Date: Tue, 20 Dec 2011 10:03:47 -0000


--x+6KMIRAuhnl3hBn
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On 2011-Dec-19 12:46:22 +0000, Hugo Silva <hugo@barafranca.com> wrote:
>I've been thinking about whether it makes sense to separate the rpool
>from the data pool(s)..

I think it does.  I have 6 1TB disks with 8GB carved off the front of
each disk for root & swap.  I initially used a separate (gmirrored)
UFS root (including /usr/src and /usr/obj) because I didn't completely
trust ZFS.  I've since moved to a 3-way mirrored ZFS root, with the
"root" area of the remaining 3 disks basically spare (I use them for
upgrades).  The bulk of the disks form a 6-way RAIDZ2 data pool.
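A layout like that might be sketched as follows (all device names, partition sizes, and GPT labels here are hypothetical, assuming GPT partitioning on disks ada0..ada5; adjust for your hardware):

```shell
# Carve ~8GB off the front of each disk for root + swap,
# leaving the remainder for the data pool.  Repeat for ada1..ada5
# with labels root1/swap1/data1, etc.
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512k ada0
gpart add -t freebsd-zfs  -s 6G -l root0 ada0
gpart add -t freebsd-swap -s 2G -l swap0 ada0
gpart add -t freebsd-zfs        -l data0 ada0

# 3-way mirrored root pool on three of the small partitions
# (the other three small partitions stay spare):
zpool create rpool mirror gpt/root0 gpt/root1 gpt/root2

# 6-way RAIDZ2 data pool across the large partitions:
zpool create data raidz2 gpt/data0 gpt/data1 gpt/data2 \
    gpt/data3 gpt/data4 gpt/data5
```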

I still think having a separate root makes sense because it should
simplify recovery if everything goes pear-shaped.

>One idea would be creating a 4-way mirror on small partitions for the
>rpool (sturdier), and a zfs raid-10 on the remaining larger partition.

I'd recommend having two 2-way mirrored root pools that you update
alternately.  There are a couple of failure modes where it can be
difficult to get back to a known working state without a second
boot/root.
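As a rough sketch of the alternating scheme (pool, partition-label, and dataset names are made up for illustration): upgrade one pool while the other stays at the last known-good system, and flip the pool's `bootfs` property to choose which one the loader boots.

```shell
# Two independent 2-way mirrored root pools:
zpool create rpool0 mirror gpt/root0 gpt/root1
zpool create rpool1 mirror gpt/root2 gpt/root3

# Boot from rpool0's root dataset for now:
zpool set bootfs=rpool0/ROOT rpool0

# After installing the upgrade into rpool1, switch over:
# zpool set bootfs=rpool1/ROOT rpool1
```

If the upgrade goes pear-shaped, the untouched pool is still bootable as-is.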

>I'm curious about the performance implications (if any) of having >1
>zpools on the same disks (considering that during normal usage, it'll be
>the data pool seeing 99.999% of the action) and whether anyone has
>thought the same and/or applied this concept in production.

I haven't done any performance comparisons but would expect this to
be similar to having multiple UFS filesystems on one disk: both pools
compete for the same spindles, so heavy activity on one can add seek
latency to the other.
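If you want to check how the pools actually share the disks in practice, per-pool I/O can be watched directly (pool names here match the earlier hypothetical examples):

```shell
# Show per-vdev I/O statistics for both pools every 5 seconds:
zpool iostat -v rpool data 5
```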

-- 
Peter Jeremy

--x+6KMIRAuhnl3hBn
Content-Type: application/pgp-signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.18 (FreeBSD)

iEYEARECAAYFAk7wP9oACgkQ/opHv/APuIeHlACfTT4yQqQFZCYpf1TZ3Y5B407L
JIUAnR8dueaWQfZ9hGpv7gPIwgyP6mcM
=Niwg
-----END PGP SIGNATURE-----

--x+6KMIRAuhnl3hBn--