From: Charles Sprickman <spork@bway.net>
Date: Wed, 7 Apr 2010 03:26:46 -0400 (EDT)
To: freebsd-fs@freebsd.org
Subject: ZFS - best practices, alternate root

Howdy,

I'm starting to roll out zfs in production, and my primary method for doing any remote rescue work is a network boot that loads an mfsroot image with a number of extra tools (including the zpool and zfs commands) and loader.conf options that load zfs.ko and opensolaris.ko. I've been documenting a number of the gotchas and little non-obvious things I've found when running a root-on-zfs setup.

One place where I'm getting stuck is working with the zfs root pool when I'm booted off alternate media. For example, after a network boot, "zpool list" shows no pools, so I do a "zpool import -f zroot". Is this correct? When I'm done, do I need to do an export and import cycle to get things ready for booting off the local zfs pool on reboot?
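Concretely, what I do today from the rescue environment is just:

```shell
# From the netbooted mfsroot, with zfs.ko and opensolaris.ko loaded.
# The local pool doesn't show up until it's imported:
zpool list

# Force-import the root pool (force needed since it was last in use
# by another system, i.e. the normal local boot):
zpool import -f zroot
```
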
The other little point of confusion is mounting the zfs root filesystem from my netboot environment. Mounting just the root fs manually (ie: "mount -t zfs zroot /mnt") works, but when I unmount it and do a "zfs list", I see that zfs now thinks "/mnt" is the new mountpoint. I've been digging around the opensolaris docs, and I'm not seeing the proper way to "temporarily" alter a zfs mountpoint. I know I can manually set it back to legacy root, but it's bad news if I forget that step.

Lastly, say I've imported the pool, mounted root, altered something on the mounted zfs filesystem, unmounted it, and set the mountpoint back to legacy root: what's the proper way to prep the pool to be ready for my next normal boot? Do I need to do the "zpool export/import" shuffle and copy /boot/zfs/zpool.cache back over in this situation?

Thanks,

Charles
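In other words, the full rescue cycle I'm imagining looks something like the following. The -R altroot flag is my guess (from reading zpool(8)) at the right way to avoid permanently rewriting the mountpoint property, and the export step is exactly the part I'm unsure about:

```shell
# 1. Import with a temporary altroot; if I understand zpool(8),
#    mountpoints are interpreted relative to /mnt for this import
#    only, so the on-disk mountpoint settings shouldn't be changed
#    (my assumption, not verified):
zpool import -f -R /mnt zroot

# 2. ... do the repair work under /mnt ...

# 3. Release the pool before rebooting from it -- whether this
#    (and refreshing zpool.cache) is actually required for the
#    next normal boot is what I'm asking:
zpool export zroot
```
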