Date:      Mon, 2 May 2016 21:43:43 +0200
From:      Sebastian Wolfgarten <sebastian@wolfgarten.com>
To:        Matthias Fechner <idefix@fechner.net>, freebsd-questions@freebsd.org
Subject:   Re: ZFS migration - New pool lost after reboot
Message-ID:  <2D936447-34C1-471B-8787-8075B19F8B28@wolfgarten.com>
In-Reply-To: <72087b33-53f9-e298-1441-4988c2a5ecb3@fechner.net>
References:  <0A383C91-FCBA-4B9E-A95A-157A13708125@wolfgarten.com> <72087b33-53f9-e298-1441-4988c2a5ecb3@fechner.net>

Hi Matthias,
dear list,

I have built a new VM to test this further without affecting my live
machine. When doing all these steps (including the amendment of
loader.conf on the new pool), the system still boots up with the old
pool. Any ideas why?

Here is what I did:

1) Create required partitions on temporary hard disk ada2
gpart create -s GPT ada2
gpart add -t freebsd-boot -s 128 ada2
gpart add -t freebsd-swap -s 4G -l newswap ada2
gpart add -t freebsd-zfs -l newdisk ada2
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
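
To double-check the layout before moving on, something like this should
do (labels as chosen above):

gpart show -l ada2    # expect newswap and newdisk on indexes 2 and 3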

2) Create new pool (newpool)

zpool create -o cachefile=/tmp/zpool.cache newpool gpt/newdisk
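
The pool and its temporary cache file can be verified with something
like:

zpool status newpool          # single vdev gpt/newdisk, state ONLINE
zpool get cachefile newpool   # should report /tmp/zpool.cache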

3) Create a snapshot of the existing zroot pool and copy it over to the new pool

zfs snapshot -r zroot@movedata
zfs send -vR zroot@movedata | zfs receive -vFd newpool
zfs destroy -r zroot@movedata
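
The replicated datasets should then show up under the new pool (names
assumed to mirror the zroot layout):

zfs list -r newpool           # expect newpool/ROOT, newpool/ROOT/default, ...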

4) Make the new pool bootable
zpool set bootfs=newpool/ROOT/default newpool
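
This can be confirmed with:

zpool get bootfs newpool      # should now report newpool/ROOT/default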

5) Mount new pool and prepare for reboot

cp /tmp/zpool.cache /tmp/newpool.cache
zpool export newpool
zpool import -c /tmp/newpool.cache -R /mnt newpool
cp /tmp/newpool.cache /mnt/boot/zfs/zpool.cache
in /mnt/boot/loader.conf the value of kern.geom.label.gptid.enable was changed from "0" to "2"
zfs set mountpoint=/ newpool/ROOT
reboot
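
Before issuing the reboot, basic sanity checks along these lines should
pass (paths as used above):

ls -l /mnt/boot/zfs/zpool.cache          # cache file present on the new pool
grep zfs /mnt/boot/loader.conf           # zfs_load="YES" still set
zfs list -o name,mountpoint -r newpool   # mountpoints shown under the -R /mnt altroot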

After the reboot, the machine is still running off the old ZFS striped
mirror, but I can mount the newpool without any problems:

root@vm:~ # cat /boot/loader.conf
kern.geom.label.gptid.enable="0"
zfs_load="YES"
root@vm:~ # zpool import -c /tmp/newpool.cache -R /mnt newpool
root@vm:~ # cd /mnt
root@vm:/mnt # ls -la
total 50
drwxr-xr-x  19 root  wheel    26 May  2 23:33 .
drwxr-xr-x  18 root  wheel    25 May  2 23:37 ..
-rw-r--r--   2 root  wheel   966 Mar 25 04:52 .cshrc
-rw-r--r--   2 root  wheel   254 Mar 25 04:52 .profile
-rw-------   1 root  wheel  1024 May  2 01:45 .rnd
-r--r--r--   1 root  wheel  6197 Mar 25 04:52 COPYRIGHT
drwxr-xr-x   2 root  wheel    47 Mar 25 04:51 bin
-rw-r--r--   1 root  wheel     9 May  2 23:27 bla
drwxr-xr-x   8 root  wheel    47 May  2 01:44 boot
drwxr-xr-x   2 root  wheel     2 May  2 01:32 dev
-rw-------   1 root  wheel  4096 May  2 23:21 entropy
drwxr-xr-x  23 root  wheel   107 May  2 01:46 etc
drwxr-xr-x   3 root  wheel    52 Mar 25 04:52 lib
drwxr-xr-x   3 root  wheel     4 Mar 25 04:51 libexec
drwxr-xr-x   2 root  wheel     2 Mar 25 04:51 media
drwxr-xr-x   2 root  wheel     2 Mar 25 04:51 mnt
drwxr-xr-x   2 root  wheel     2 May  2 23:33 newpool
dr-xr-xr-x   2 root  wheel     2 Mar 25 04:51 proc
drwxr-xr-x   2 root  wheel   147 Mar 25 04:52 rescue
drwxr-xr-x   2 root  wheel     7 May  2 23:27 root
drwxr-xr-x   2 root  wheel   133 Mar 25 04:52 sbin
lrwxr-xr-x   1 root  wheel    11 Mar 25 04:52 sys -> usr/src/sys
drwxrwxrwt   6 root  wheel     7 May  2 23:33 tmp
drwxr-xr-x  16 root  wheel    16 Mar 25 04:52 usr
drwxr-xr-x  24 root  wheel    24 May  2 23:21 var
drwxr-xr-x   2 root  wheel     2 May  2 01:32 zroot
root@vm:/mnt # cd boot
root@vm:/mnt/boot # cat loader.conf
kern.geom.label.gptid.enable="2"
zfs_load="YES"

My question is: How do I make my system permanently boot off the newpool
such that I can destroy the existing zroot one?

Many thanks for your help, it is really appreciated.

Best regards
Sebastian

> On 29.04.2016 at 10:25, Matthias Fechner <idefix@fechner.net> wrote:
>
> On 28.04.2016 at 23:14, Sebastian Wolfgarten wrote:
>> 5) Mount new pool and prepare for reboot
>>
>> cp /tmp/zpool.cache /tmp/newpool.cache
>> zpool export newpool
>> zpool import -c /tmp/newpool.cache -R /mnt newpool
>> cp /tmp/newpool.cache /mnt/boot/zfs/zpool.cache
>> zfs set mountpoint=/ newpool/ROOT
>> reboot
>
> I think you forgot to adapt vfs.zfs.mountfrom= in /boot/loader.conf
> on the new pool?
>
> Regards
> Matthias
>
> --
>
> "Programming today is a race between software engineers striving to
> build bigger and better idiot-proof programs, and the universe trying to
> produce bigger and better idiots. So far, the universe is winning." --
> Rich Cook
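
For reference, the vfs.zfs.mountfrom line Matthias refers to would
presumably look roughly like this in /mnt/boot/loader.conf, with the
dataset name assumed to match the bootfs set above:

vfs.zfs.mountfrom="zfs:newpool/ROOT/default"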



