From owner-freebsd-stable@FreeBSD.ORG Wed Sep 2 16:40:35 2009
From: Freddie Cash <fjwcash@gmail.com>
To: FreeBSD Stable <freebsd-stable@freebsd.org>
Date: Wed, 2 Sep 2009 09:40:34 -0700
Subject: Re: zfs on gmirror slice
On Wed, Sep 2, 2009 at 1:49 AM, Mark Stapper wrote:
> Thomas Backman wrote:
> > On Sep 2, 2009, at 10:27 AM, Mark Stapper wrote:
> > Nothing a LiveCD or something to that effect can't handle. Obviously
> > this doesn't work for everyone, but it should for many.
> Actually it won't, because updating ZFS comes with updating your world.
> After updating your world, you will be running a newer ZFS version than
> the one that came with the RELEASE install, hence the need to update
> your ZFS filesystems. Incidentally, the livefs CD contains the "old"
> ZFS version. You see where I'm going?

The new version of the ZFS tools (zpool/zfs) will work with older
versions of the on-disk formats (pool and filesystem). Thus, you can
boot into an 8.0 system while using a ZFSv6 pool from a 7.2 system.
Upgrading the world only upgrades the tools; it doesn't upgrade the
on-disk format. Only once you *manually* upgrade your filesystems and
pools to ZFSv13 can you no longer access them on older systems.

Thus, there's no issue. You can start with a FreeBSD 7.x system running
ZFSv6, upgrade it via buildworld to FreeBSD 8.0, and continue running
your ZFSv6 pool and filesystems. Sometime in the future, you can then
upgrade the pool and filesystems to ZFSv13 and continue on your merry
way. ZFS provides backward compatibility, and doesn't automatically
upgrade your pools or filesystems.

> > If ZFS finds a corrupted copy and a non-corrupted one in a mirrored
> > ZFS pool, it will repair the damage so that both copies are valid,
> > so yes, self-healing will indeed occur. :)
> I'm feeling Shakespearean again... My point was that I find
> "self-healing" too magic-sounding, while indeed it means "automatic
> data error detection and rebuilding" or something along those lines.
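To make that manual step concrete, the upgrade is roughly the following
(the pool name "tank" is just an example; check the version listings
before committing, since the upgrade is one-way):

```sh
# Show which pools and filesystems are still on an older on-disk version;
# nothing is changed by these two commands
zpool upgrade
zfs upgrade

# Upgrade the pool to the newest version the installed tools support
# (ZFSv13 on 8.0) -- older systems can no longer import it afterwards
zpool upgrade tank

# Upgrade all filesystems in the pool, recursively
zfs upgrade -r tank
```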
> We don't call the automatic remapping of bad sectors in HDDs
> "self-healing", do we?
> Alas, I must admit that on the file-system level, some "healing" does
> take place.

You should read some of the ZFS white papers and blog postings to
better understand what "self-healing" in ZFS is all about. It's a lot
more than "automatically rebuild arrays" or "automatically re-map bad
sectors". And it works at the individual file level (possibly the data
block level), instead of the "entire disk" level at which gmirror
works.

For fun, use dd to zero out some random sectors on a drive that's part
of a gmirror array and a ZFS mirror, and see what happens. ;)

-- 
Freddie Cash
fjwcash@gmail.com
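P.S. For anyone who wants to try the experiment above, it might look
something like this (device and pool names are examples only; dd here
destroys data on the targeted sectors, so use scratch disks):

```sh
# Overwrite a few sectors on one side of a ZFS mirror (scratch disk!)
dd if=/dev/zero of=/dev/ad2 bs=512 count=64 seek=123456

# Ask ZFS to read and checksum-verify every block in the pool
zpool scrub tank

# ZFS flags the blocks whose checksums no longer match and rewrites
# them from the good side of the mirror; the CKSUM column in the
# status output shows what was detected and repaired
zpool status -v tank
```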