From owner-freebsd-current@FreeBSD.ORG Thu Jul 16 23:07:16 2009
Date: Thu, 16 Jul 2009 19:07:15 -0400
From: Louis Mamakos <louie@transsys.com>
To: Freddie Cash
Cc: freebsd-current@freebsd.org
Subject: Re: ZFS pool corrupted on upgrade of -current (probably sata renaming)

On Wed, Jul 15, 2009 at 03:19:30PM -0700, Freddie Cash wrote:
>
> Hrm, you might need to do this from single-user mode, without the ZFS
> filesystems mounted, or the drives in use.  Or from a LiveFS CD, if /usr is
> a ZFS filesystem.
>
> On our ZFS hosts, / and /usr are on UFS (gmirror).

I don't understand why you'd expect you could take an existing container
on a disk, like a FreeBSD slice with some sort of live data within it,
and just decide you're going to take away one or more blocks at the end
to create a new container within it.
If you look at page 7 of the ZFS on-disk format document that was
recently mentioned, you'll see that ZFS stores 4 copies of its "vdev
label": two at the front of the physical vdev and two at the end, each
of them apparently 256 KB in length.  That's assuming that ZFS doesn't
round down the size of the vdev to some convenient boundary.  Isn't it
going to get upset that the vdev just shrank out from under it?

I've always thought of glabel as creating a new (named) container within
some existing physical or logical container.  The notion that you can
just create one inside an existing container with live data seems
dangerous.  Or, at a minimum, it depends on some other property of the
existing live data that would lead you to believe it can do without its
last block.

louie
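To make the failure mode concrete, here is a small sketch of the label
arithmetic as I read the on-disk format document: labels L0/L1 sit at the
front of the vdev and L2/L3 are placed relative to its end, so if glabel
steals the last sector the end labels land somewhere ZFS isn't looking.
This is an illustration only; the real code aligns the vdev size before
computing the end offsets, and the 512-byte sector and disk size below
are made-up numbers, not anything from the original thread.

```python
LABEL_SIZE = 256 * 1024  # 256 KB per vdev label, per the on-disk format doc
SECTOR = 512             # assumed sector size that glabel would consume

def vdev_label_offsets(vdev_size):
    """Byte offsets of the four vdev labels for a vdev of vdev_size bytes.

    Two labels at the front, two computed back from the end.  (ZFS itself
    aligns vdev_size down first; that detail is omitted here.)
    """
    return [
        0,                           # L0
        LABEL_SIZE,                  # L1
        vdev_size - 2 * LABEL_SIZE,  # L2
        vdev_size - LABEL_SIZE,      # L3
    ]

disk = 4 * 1024**3  # a hypothetical 4 GB disk

before = vdev_label_offsets(disk)
after = vdev_label_offsets(disk - SECTOR)  # glabel took the last sector

# The end labels ZFS wrote are no longer where a re-read expects them.
print(before[2:], after[2:])
```

The point of the comparison is just that L2 and L3 shift by one sector
the moment the provider shrinks, which is why writing glabel metadata
into a slice already holding a live vdev looks dangerous.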