Date:      Sun, 01 Jul 2018 16:53:03 +0900
From:      ht7261@ot.x0.to
To:        freebsd-stable@freebsd.org
Subject:   zfs rootfs mount failure after upgrade to 11.2R from 11.1R-p10
Message-ID:  <201807010753.w617r38G092240@www1607.sakura.ne.jp>

Hello,

zfs rootfs mount fails for me after an upgrade from
11.1R-p10 to the recent 11.2R.

I upgraded my amd64 11.1R-p10 system to 11.2R using
"freebsd-update upgrade".

On the first reboot after the first "freebsd-update install",
the system fails when the new kernel is about to mount its
root filesystem.

--------------------
 :
FreeBSD 11.2-RELEASE #0 r335510: Fri Jun 22 04:32:14 UTC 2018
    root@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
 :
 :
ukbd0 on uhub1
ukbd0: <ATEN Advance Tech Inc. USB DVI KVM, class 0/0, rev 1.10/1.00, addr 3> on usbus0
kbd2 at ukbd0
Mounting from zfs:zpool1/ROOT/default failed with error 6; retrying for 3 more seconds
Mounting from zfs:zpool1/ROOT/default failed with error 6.

Loader variables:
  vfs.root.mountfrom=zfs:zpool1/ROOT/default

 :
 :
mountroot> ?

List of GEOM managed disk devices:
  gpt/zfs0 msdosfs/EFI gpt/swap0 msdosfs/EFISYS gpt/EFI nvd0p3 nvd0p2 nvd0p1 cd0 nvd0

mountroot>
--------------------
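
(For completeness: if I remember the mountroot behavior correctly, the
prompt also accepts a root spec typed by hand to retry the mount, e.g.

--------------------
mountroot> zfs:zpool1/ROOT/default
--------------------

but that should just hit the same error 6.)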

If I choose 'kernel.old' at the Beastie menu, the system boots the
previous 11.1-p10 kernel fine. (That kernel was also installed from an
earlier 11.1-pX by freebsd-update.)
Below is a single-user-mode boot of that kernel, tried AFTER the
error above.

--------------------
 :
FreeBSD 11.1-RELEASE-p10 #0 Tue May  8 05:21:56 UTC 2018
    root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64
 :
 :
ukbd0 on uhub1
ukbd0: <ATEN Advance Tech Inc. USB DVI KVM, class 0/0, rev 1.10/1.00, addr 3> on usbus0
kbd2 at ukbd0
Enter full pathname of shell or RETURN for /bin/sh:
Cannot read termcap database;
using dumb terminal settings.
# mount
zpool1/ROOT/default on / (zfs, local, noatime, read-only, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
# zpool status
  pool: zpool1
 state: ONLINE
  scan: scrub repaired 0 in (..snip..)
config:

        NAME        STATE     READ WRITE CKSUM
        zpool1      ONLINE      0      0     0
          nvd0p3    ONLINE      0      0     0

errors: No known data errors
# gpart show
=>       40   500118112  nvd0  GPT  (238G)
         40        1600     1  efi  (800K)
       1640         480        - free -  (204K)
       2048  134217728      2  freebsd-swap  (64G)
  134219776  365897728      3  freebsd-zfs  (174G)
  500117504        648         - free -  (324K)

--------------------
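
It may also be worth comparing the pool's bootfs property against the
loader variable; something like:

--------------------
# zpool get bootfs zpool1
--------------------

should show zpool1/ROOT/default if the two agree.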

No hardware configuration nor BIOS/UEFI settings were touched in
between the two cases.

Was there any change between 11.1 and 11.2 that might have caused
this problem? (Error 6 is ENXIO, "Device not configured".)

In case it matters: this is a UEFI-only, root-on-ZFS, ZFS-only-boot
setup on an NVMe SSD. The machine is an 'Intel Core i5-7600T @ 2.00GHz'
on an 'ASRock H270 Pro4' motherboard with 64GB RAM and a
'PM951 NVMe SAMSUNG 256GB' NVMe SSD.

I am going to create another filesystem on this pool, set it as the
pool's bootfs property, and install a pristine OS there to see what
happens.
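
Probably something along these lines (the dataset name here is just a
placeholder):

--------------------
# zfs create -o canmount=noauto -o mountpoint=/ zpool1/ROOT/test112
# zpool set bootfs=zpool1/ROOT/test112 zpool1
--------------------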

In the meantime, is it safe to run:
# freebsd-update rollback
while the system is up on kernel.old (11.1-p10), to get it booting
11.1-p10 by default again?
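
That is, roughly this, then checking with freebsd-version(1) that the
installed kernel and userland read 11.1-RELEASE-p10 again:

--------------------
# freebsd-update rollback
# freebsd-version -ku
--------------------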

Thank you in advance. 
Hiroharu

