Date:      Thu, 20 Dec 2012 00:34:11 +0200
From:      Kimmo Paasiala <kpaasial@gmail.com>
To:        Andriy Gapon <avg@freebsd.org>
Cc:        FreeBSD current <freebsd-current@freebsd.org>, FreeBSD Stable <freebsd-stable@freebsd.org>
Subject:   Re: [HEADSUP] zfs root pool mounting
Message-ID:  <CA+7WWScyQurr94XsGdyYwDP5DFpAd0wh-U5B73anXgxDb7t=Bw@mail.gmail.com>
In-Reply-To: <50B6598B.20200@FreeBSD.org>
References:  <50B6598B.20200@FreeBSD.org>

On Wed, Nov 28, 2012 at 8:35 PM, Andriy Gapon <avg@freebsd.org> wrote:
>
> Recently some changes were made to how a root pool is opened for root filesystem
> mounting.  Previously the root pool had to be present in zpool.cache.  Now it is
> automatically discovered by probing available GEOM providers.
> The new scheme is believed to be more flexible.  For example, it allows you
> to prepare a new root pool on one system, export it, and then boot from it on
> another system without any extra/magical steps involving zpool.cache.  It can
> also be convenient after a zpool split and in some other situations.
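>
> As a rough sketch (the pool name "newroot" and the device names here are
> made up for illustration), that workflow could look like this:
>
>   # on the donor system: create the pool and mark its boot dataset
>   zpool create newroot mirror /dev/ada1p3 /dev/ada2p3
>   zpool set bootfs=newroot newroot
>   # ... install the system onto the pool ...
>   zpool export newroot
>   # then move the disks to the new system and boot from them;
>   # no copying of zpool.cache should be needed anymore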
>
> The change was introduced via multiple commits; the latest relevant revision
> in head is r243502.  The changes have been partially MFC-ed, and the
> remaining parts are scheduled to be MFC-ed soon.
>
> I have received a report that the change caused a problem with booting on at
> least one system.  That problem has been identified as an issue in the local
> environment and has been fixed.  Please read on to see whether you might be
> affected when you upgrade, so that you can avoid any unnecessary surprises.
>
> You might be affected if all of the following are true: you once had a pool
> with the same name as your current root pool; some disks that belonged to
> that pool (in whole or through some of their partitions) are still connected
> to your system; and that pool was never properly destroyed using zpool
> destroy, but merely abandoned (its disks re-purposed/re-partitioned/reused).
>
> If all of the above are true, then I recommend that you run 'zdb -l <disk>'
> for all suspect disks and their partitions (or just all disks and
> partitions).  If this command reports at least one valid ZFS label for a disk
> or a partition that does not belong to any current pool, then the problem may
> affect you.
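>
> For example, a quick sweep over all disks and partitions might look like
> this (a sketch only; adjust the device globs to match your system):
>
>   # print the ZFS labels, if any, for every disk and partition
>   for dev in /dev/ada* /dev/da*; do
>       echo "=== ${dev}"
>       zdb -l "${dev}"
>   done
>
> Any valid label whose pool name or guid does not match the output of
> 'zpool status' is a leftover from an abandoned pool.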
>
> The best course is to remove the offending labels.
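>
> As an illustration only (this is destructive, so triple-check the target
> device before running it, and note that 'zpool labelclear' may not be
> available on older systems), clearing stale labels could look like this:
>
>   # preferred, where zpool(8) supports it:
>   zpool labelclear -f /dev/ada1p3
>
>   # otherwise zero the label areas by hand; ZFS keeps two 256 KB
>   # labels at the front of the device and two at the end
>   dev=/dev/ada1p3
>   size=$(diskinfo ${dev} | awk '{print $3}')   # media size in bytes
>   dd if=/dev/zero of=${dev} bs=256k count=2
>   dd if=/dev/zero of=${dev} bs=256k count=2 seek=$(( size / 262144 - 2 ))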
>
> If you are affected, please follow up to this email.
>
> --
> Andriy Gapon
> _______________________________________________
> freebsd-current@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"

Hello,

What is the status of the MFC process to 9-STABLE?  I'm running 9-STABLE
r244407; should I be able to boot from this ZFS pool without
zpool.cache?

zpool status
  pool: zwhitezone
 state: ONLINE
  scan: scrub repaired 0 in 0h53m with 0 errors on Sat Dec 15 23:41:09 2012
config:

        NAME               STATE     READ WRITE CKSUM
        zwhitezone         ONLINE       0     0     0
          mirror-0         ONLINE       0     0     0
            label/wzdisk0  ONLINE       0     0     0
            label/wzdisk1  ONLINE       0     0     0
          mirror-1         ONLINE       0     0     0
            label/wzdisk2  ONLINE       0     0     0
            label/wzdisk3  ONLINE       0     0     0

errors: No known data errors


