From: Sebastian Wolfgarten <sebastian@wolfgarten.com>
Subject: Re: ZFS migration - New pool lost after reboot
Date: Mon, 2 May 2016 22:42:47 +0200
To: Matthias Fechner, freebsd-questions@freebsd.org
Message-Id: <6E1B2BCF-3B5C-4D18-9152-FE68711B2B43@wolfgarten.com>
In-Reply-To: <2D936447-34C1-471B-8787-8075B19F8B28@wolfgarten.com>
References: <0A383C91-FCBA-4B9E-A95A-157A13708125@wolfgarten.com> <72087b33-53f9-e298-1441-4988c2a5ecb3@fechner.net> <2D936447-34C1-471B-8787-8075B19F8B28@wolfgarten.com>

Hi,

just to follow up on my own email from earlier: I managed to get the new pool booting by amending /boot/loader.conf as follows:

root@vm:~ # cat /boot/loader.conf
vfs.root.mountfrom="zfs:newpool/ROOT/default"
kern.geom.label.gptid.enable="2"
zfs_load="YES"

However, when rebooting I can see that it is using the new pool, but I am running into issues because it cannot find some essential files in /usr:

Mounting local file systems
eval: zfs not found
eval: touch not found
/etc/rc: cannot create /dev/null: No such file or directory
/etc/rc: date: not found

Here is what "zfs list" shows:

root@vm:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
newpool                385M  5.41G    19K  /mnt/zroot
newpool/ROOT           385M  5.41G    19K  /mnt
newpool/ROOT/default   385M  5.41G   385M  /mnt
newpool/tmp             21K  5.41G    21K  /mnt/tmp
newpool/usr             76K  5.41G    19K  /mnt/usr
newpool/usr/home        19K  5.41G    19K  /mnt/usr/home
newpool/usr/ports       19K  5.41G    19K  /mnt/usr/ports
newpool/usr/src         19K  5.41G    19K  /mnt/usr/src
newpool/var            139K  5.41G    19K  /mnt/var
newpool/var/audit       19K  5.41G    19K  /mnt/var/audit
newpool/var/crash       19K  5.41G    19K  /mnt/var/crash
newpool/var/log         44K  5.41G    44K  /mnt/var/log
newpool/var/mail        19K  5.41G    19K  /mnt/var/mail
newpool/var/tmp         19K  5.41G    19K  /mnt/var/tmp
zroot                  524M  26.4G    96K  /zroot
zroot/ROOT             522M  26.4G    96K  none
zroot/ROOT/default     522M  26.4G   522M  /
zroot/tmp             74.5K  26.4G  74.5K  /tmp
zroot/usr              384K  26.4G    96K  /usr
zroot/usr/home          96K  26.4G    96K  /usr/home
zroot/usr/ports         96K  26.4G    96K  /usr/ports
zroot/usr/src           96K  26.4G    96K  /usr/src
zroot/var              580K  26.4G    96K  /var
zroot/var/audit         96K  26.4G    96K  /var/audit
zroot/var/crash         96K  26.4G    96K  /var/crash
zroot/var/log          103K  26.4G   103K  /var/log
zroot/var/mail          96K  26.4G    96K  /var/mail
zroot/var/tmp         92.5K  26.4G  92.5K  /var/tmp

I assume I have to amend the ZFS mountpoint properties, but I cannot figure out what is wrong. I tried things like:

zfs set mountpoint=/usr newpool/usr
zfs set mountpoint=/tmp newpool/tmp
zfs set mountpoint=/var newpool/var

Unfortunately this did not solve the issue. Any ideas?
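One thing I noticed when comparing the two pools above: on the old pool, zroot/ROOT has its mountpoint set to "none", while on the new pool I had set newpool/ROOT to "/" (the /mnt prefixes in the listing are only the altroot from the "zpool import -R /mnt" and not the stored mountpoints). As a rough sketch of what I plan to try next (untested, so please correct me if this is wrong), I would mirror the old layout:

# with the pool imported via "zpool import -R /mnt newpool"
zfs set mountpoint=none newpool/ROOT         # match zroot/ROOT, which shows "none" above
zfs set mountpoint=/ newpool/ROOT/default    # the boot environment itself mounts at /
zfs get -r mountpoint,canmount newpool       # verify the stored properties before rebooting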
Many thanks.

Best regards
Sebastian
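PS: For anyone following along, this is the rough sanity checklist I am now running before each reboot (just my own notes, not authoritative; it assumes the pool is still imported with "-R /mnt"):

zpool get bootfs newpool       # expect: newpool/ROOT/default
cat /mnt/boot/loader.conf      # expect: vfs.root.mountfrom="zfs:newpool/ROOT/default" and zfs_load="YES"
gpart show ada2                # confirm the freebsd-boot partition to which gptzfsboot was written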
> On 02.05.2016 at 21:43, Sebastian Wolfgarten wrote:
> 
> Hi Matthias,
> dear list,
> 
> I have built a new VM to test this further without affecting my live machine. After performing all of these steps (including the amendment of loader.conf on the new pool), my system still boots up with the old pool. Any ideas why?
> 
> Here is what I did:
> 
> 1) Create the required partitions on the temporary hard disk ada2
> 
> gpart create -s GPT ada2
> gpart add -t freebsd-boot -s 128 ada2
> gpart add -t freebsd-swap -s 4G -l newswap ada2
> gpart add -t freebsd-zfs -l newdisk ada2
> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
> 
> 2) Create the new pool (newpool)
> 
> zpool create -o cachefile=/tmp/zpool.cache newpool gpt/newdisk
> 
> 3) Create a snapshot of the existing zroot pool and copy it over to the new pool
> 
> zfs snapshot -r zroot@movedata
> zfs send -vR zroot@movedata | zfs receive -vFd newpool
> zfs destroy -r zroot@movedata
> 
> 4) Make the new pool bootable
> 
> zpool set bootfs=newpool/ROOT/default newpool
> 
> 5) Mount the new pool and prepare for reboot
> 
> cp /tmp/zpool.cache /tmp/newpool.cache
> zpool export newpool
> zpool import -c /tmp/newpool.cache -R /mnt newpool
> cp /tmp/newpool.cache /mnt/boot/zfs/zpool.cache
> in /mnt/boot/loader.conf, changed the value of kern.geom.label.gptid.enable from "0" to "2"
> zfs set mountpoint=/ newpool/ROOT
> reboot
> 
> After the reboot, the machine is still running off the old ZFS striped mirror, but I can mount the newpool without any problems:
> 
> root@vm:~ # cat /boot/loader.conf
> kern.geom.label.gptid.enable="0"
> zfs_load="YES"
> root@vm:~ # zpool import -c /tmp/newpool.cache -R /mnt newpool
> root@vm:~ # cd /mnt
> root@vm:/mnt # ls -la
> total 50
> drwxr-xr-x  19 root  wheel    26 May  2 23:33 .
> drwxr-xr-x  18 root  wheel    25 May  2 23:37 ..
> -rw-r--r--   2 root  wheel   966 Mar 25 04:52 .cshrc
> -rw-r--r--   2 root  wheel   254 Mar 25 04:52 .profile
> -rw-------   1 root  wheel  1024 May  2 01:45 .rnd
> -r--r--r--   1 root  wheel  6197 Mar 25 04:52 COPYRIGHT
> drwxr-xr-x   2 root  wheel    47 Mar 25 04:51 bin
> -rw-r--r--   1 root  wheel     9 May  2 23:27 bla
> drwxr-xr-x   8 root  wheel    47 May  2 01:44 boot
> drwxr-xr-x   2 root  wheel     2 May  2 01:32 dev
> -rw-------   1 root  wheel  4096 May  2 23:21 entropy
> drwxr-xr-x  23 root  wheel   107 May  2 01:46 etc
> drwxr-xr-x   3 root  wheel    52 Mar 25 04:52 lib
> drwxr-xr-x   3 root  wheel     4 Mar 25 04:51 libexec
> drwxr-xr-x   2 root  wheel     2 Mar 25 04:51 media
> drwxr-xr-x   2 root  wheel     2 Mar 25 04:51 mnt
> drwxr-xr-x   2 root  wheel     2 May  2 23:33 newpool
> dr-xr-xr-x   2 root  wheel     2 Mar 25 04:51 proc
> drwxr-xr-x   2 root  wheel   147 Mar 25 04:52 rescue
> drwxr-xr-x   2 root  wheel     7 May  2 23:27 root
> drwxr-xr-x   2 root  wheel   133 Mar 25 04:52 sbin
> lrwxr-xr-x   1 root  wheel    11 Mar 25 04:52 sys -> usr/src/sys
> drwxrwxrwt   6 root  wheel     7 May  2 23:33 tmp
> drwxr-xr-x  16 root  wheel    16 Mar 25 04:52 usr
> drwxr-xr-x  24 root  wheel    24 May  2 23:21 var
> drwxr-xr-x   2 root  wheel     2 May  2 01:32 zroot
> root@vm:/mnt # cd boot
> root@vm:/mnt/boot # cat loader.conf
> kern.geom.label.gptid.enable="2"
> zfs_load="YES"
> 
> My question is: How do I make my system permanently boot off the newpool, such that I can destroy the existing zroot pool?
> 
> Many thanks for your help, it is really appreciated.
> 
> Best regards
> Sebastian
> 
>> On 29.04.2016 at 10:25, Matthias Fechner wrote:
>> 
>> On 28.04.2016 at 23:14, Sebastian Wolfgarten wrote:
>>> 5) Mount the new pool and prepare for reboot
>>> 
>>> cp /tmp/zpool.cache /tmp/newpool.cache
>>> zpool export newpool
>>> zpool import -c /tmp/newpool.cache -R /mnt newpool
>>> cp /tmp/newpool.cache /mnt/boot/zfs/zpool.cache
>>> zfs set mountpoint=/ newpool/ROOT
>>> reboot
>> 
>> I think you forgot to adapt vfs.zfs.mountfrom= in /boot/loader.conf on the new pool?
>> 
>> Regards
>> Matthias
>> 
>> -- 
>> "Programming today is a race between software engineers striving to
>> build bigger and better idiot-proof programs, and the universe trying to
>> produce bigger and better idiots. So far, the universe is winning." --
>> Rich Cook

_______________________________________________
freebsd-questions@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscribe@freebsd.org"