From owner-freebsd-current@FreeBSD.ORG Thu Dec 13 11:24:48 2012
From: Andriy Gapon <avg@FreeBSD.org>
Date: Thu, 13 Dec 2012 13:24:44 +0200
To: Garrett Cooper
Cc: FreeBSD Current
Subject: Re: [HEADSUP] zfs root pool mounting
Message-ID: <50C9BAFC.4020804@FreeBSD.org>
References: <50B6598B.20200@FreeBSD.org>
List-Id: Discussions about the use of FreeBSD-current

on 07/12/2012 02:55 Garrett Cooper said the following:
> If I try and let it import the pool at boot, it claims the pool is in a
> FAULTED state when I point mountroot to /dev/cd0 (one of gjb's
> snapshot CDs -- thanks!), run service hostid onestart, etc. If I
> export and try to reimport the pool, it claims it's not available (!).
> However, if I boot, run service hostid onestart, _then_ import the
> pool, then the pool is imported properly.

This sounds messy; I am not sure it has any informative value.
I think I have seen something like this after a recent ZFS import from upstream,
when my kernel and userland were out of sync.
Do you do a full boot from the livecd, or do you boot your kernel but then
mount userland from the CD?
In any case, I am not sure this is relevant to your main trouble.

> While I was mucking around with the pool trying to get the system to
> boot, I set the cachefile attribute to /boot/zfs/zpool.cache before
> upgrading. In order to diagnose whether or not that was at fault, I
> set it back to none, and I'm still running into the same issue.
>
> I'm going to try backing out your commit and rebuilding my kernel in
> order to determine whether or not that's at fault.
>
> One other thing: both my machines have more than one ZFS-only zpool,
> and it might be probing the pools in the wrong order; one of the pools
> has bootfs set, the other doesn't, and the behavior sort of
> resembles it not being set properly.

The bootfs property should not matter.
Multi-pool configurations have been tested before the commit.

-- 
Andriy Gapon
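
P.S. Roughly the sequence I would try from the livecd environment, assuming a
single test pool named "tank" (a placeholder, adjust to your pool name):

    # set the hostid first; without it the pool can show up as FAULTED
    service hostid onestart

    # import without mounting any datasets, then check the properties
    # discussed above
    zpool import -N tank
    zpool get bootfs,cachefile tank

    # undo the cachefile experiment if it is still set
    zpool set cachefile=none tank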