Date:      Wed, 25 Jul 2018 21:39:37 -0700
From:      John Kennedy <warlock@phouka.net>
To:        Andriy Gapon <avg@freebsd.org>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Mounting from zfs:zroot/ROOT/default failed with error 2
Message-ID:  <20180726043937.GC75644@phouka1.phouka.net>
In-Reply-To: <3596ef16-da50-b26c-b7fd-724ca020cba2@FreeBSD.org>
References:  <20180724012745.GB75644@phouka1.phouka.net> <3596ef16-da50-b26c-b7fd-724ca020cba2@FreeBSD.org>

On Tue, Jul 24, 2018 at 11:33:52AM +0300, Andriy Gapon wrote:
> This seems like possibly the same problem as
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229972
> That bug report also describes getting error 2 (ENOENT).
> Your report adds an additional detail, "unknown file system".
> That means that vfs_byname() cannot find "zfs" filesystem type.
> As-if zfs module was not loaded.
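
If that's what's happening, I should be able to confirm it from the loader
prompt before the kernel ever starts (I'm guessing at a test here; lsmod at
the "OK" prompt lists the modules the loader has staged):

	OK lsmod
	OK load zfs	# only needed if zfs.ko is missing from the listing
	OK boot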

Before I try to make this a bug report, let's sanity-check this.  My partitions:

	=>       63  124735425  da0  MBR  (59G)
	         63       1985       - free -  (993K)
	       2048     104448    1  fat32lba  [active]  (51M)
	     106496  124624896    2  freebsd  (59G)
	  124731392       4096       - free -  (2.0M)
	
	=>        0  124624896  da0s2  BSD  (59G)
	          0   92274688      1  freebsd-zfs  (44G)
	   92274688   16777216      2  freebsd-ufs  (8.0G)
	  109051904   15572992      4  freebsd-swap  (7.4G)

I took that dump while I was creating the image, so the disk shows up as da0
there but as mmcsd0 once it's in the board (da0s1 -> mmcsd0s1).  That part
works, since it boots into the kernel; when the kernel then fails to mount
root from the -zfs partition, I can boot into the -ufs partition from the
mountroot> prompt.
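
For reference, this is roughly what I do at that prompt ("?" lists the
devices the kernel can see; I'm reading the UFS partition as mmcsd0s2b,
matching BSD partition 2 in the label above):

	mountroot> ?				# list usable devices
	mountroot> zfs:zroot/ROOT/default	# fails with error 2
	mountroot> ufs:/dev/mmcsd0s2b		# this one boots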

I created the ZFS pool like this:

	zpool create -f -m none -o altroot=/mnt/dst zroot ${DISK}s2a
	zfs set compression=lz4 zroot
	zfs set atime=off zroot

	zfs create -o mountpoint=none zroot/ROOT

	zfs create zroot/ROOT/default
	zfs set mountpoint=/ zroot/ROOT/default
	zfs set canmount=noauto zroot/ROOT/default
	zpool set bootfs=zroot/ROOT/default zroot
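
To rule out a typo in that last step, zpool can report the property back
(it should come back as zroot/ROOT/default):

	# zpool get bootfs zroot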

I've added zfs_load="YES" to /boot/loader.conf and zfs_enable="YES" to
/etc/rc.conf.  I don't have a / mountpoint specified in /etc/fstab as I do
with the UFS root, but I don't have one on my amd64 ZFS system either.
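
For what it's worth, some recipes also pin the root dataset explicitly in
loader.conf instead of relying on the pool's bootfs property; I haven't
tried that on this box yet:

	# /boot/loader.conf
	zfs_load="YES"
	vfs.root.mountfrom="zfs:zroot/ROOT/default"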

No errors detected during scrub:

	# zpool status -v
	  pool: zroot
	 state: ONLINE
	  scan: scrub repaired 0 in 0 days 00:00:47 with 0 errors on Wed Jul 25 21:21:28 2018
	config:
	
	        NAME         STATE     READ WRITE CKSUM
	        zroot        ONLINE       0     0     0
	          mmcsd0s2a  ONLINE       0     0     0
	
	errors: No known data errors

Things look good to me:

	# zpool list
	NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
	zroot  43.5G   809M  42.7G        -         -     1%     1%  1.00x  ONLINE  -

	# zfs list -tall
	NAME                 USED  AVAIL  REFER  MOUNTPOINT
	zroot                809M  41.4G    23K  /zroot
	zroot/ROOT           809M  41.4G    23K  none
	zroot/ROOT/default   809M  41.4G   809M  /
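
The per-dataset properties can be double-checked the same way; given the
zfs set commands above, this should report canmount=noauto and mountpoint=/:

	# zfs get canmount,mountpoint zroot/ROOT/default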

I only added zfs_load="YES" to /boot/loader.conf (which is all I did for
amd64), but I've noticed that some old recipes load opensolaris as well.
I do see it loaded now, as a dependency of zfs.ko I'm guessing:

	# kldstat
	Id Refs Address                Size Name
	 1   25 0xffff000000000000  13eba80 kernel
	 3    1 0xffff000054400000   2c2000 zfs.ko
	 4    1 0xffff0000546c2000    47000 opensolaris.ko
	 5    1 0xffff000054709000    41000 tmpfs.ko
	 6    1 0xffff00005474a000    41000 if_muge.ko
	 7    1 0xffff00005478b000    41000 ums.ko
	 8    1 0xffff000054a00000    41000 uhid.ko
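
One more check that seems worth doing (my own idea): make sure the module
files are actually sitting where the loader will look for them, since
loader.conf only names the module:

	# ls -l /boot/kernel/zfs.ko /boot/kernel/opensolaris.ko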



