Date:      Mon, 28 Oct 2019 08:10:43 -0700
From:      John Kennedy <warlock@phouka.net>
To:        ro Ze <testcb00@gmail.com>
Cc:        freebsd-arm@freebsd.org
Subject:   Re: Is it possible to build a ZFS Raspberry Pi 3B/3B+/4 image? with other questions.
Message-ID:  <20191028151043.GA2664@phouka1.phouka.net>
In-Reply-To: <CAL4V3=BkWafOLhUazHQVePbF+iY-ovC9J7X_5H0DNOf_9e6W2Q@mail.gmail.com>
References:  <CAL4V3=BkWafOLhUazHQVePbF+iY-ovC9J7X_5H0DNOf_9e6W2Q@mail.gmail.com>

On Sat, Oct 26, 2019 at 07:58:37PM +0900, ro Ze wrote:
> Hi everyone, this is my first FreeBSD post. I am using the FreeBSD 12.1-RC2
> RPI3 image on my Raspberry Pi 3B+. ...
> 
> I have some experience with FreeNAS, and I know the two systems are not the
> same. On FreeNAS there is a tool named iocage, which is a FreeBSD Jail
> manager. Using iocage, I can pass a dedicated network interface to a Jail
> via the vnet feature, which is supported by default in FreeBSD 12.0. With
> that feature, I had the idea of using FreeBSD Jails on my Raspberry Pi, and
> I started to do this.

  FYI, the only jails I use on my RPI are for poudriere.  It's a very
memory-constrained platform, obviously.

> Before flashing the image to the SD card, I had downloaded the x86 DVD and
> installed it as a VM to learn the basic commands in FreeBSD. Since the
> installation process allowed me to use ZFS as the root file system, I had
> little trouble installing iocage and using a Jail with vnet. However, I
> found that the root file system of the FreeBSD RPI3 image is UFS, which is
> not supported by iocage. This means that I will have to use another storage
> device to create a new zpool for iocage.
> 
> Since the Raspberry Pi doesn't have a native SATA port and all its USB
> ports come from a USB hub, I believe that using iocage on a USB device will
> come with a huge performance drop. However, to try the feature, I accepted
> this and created a USB zpool for iocage. I have built a zpool for Jail
> storage on another USB stick in my FreeNAS, and I am now importing the pool
> on my Raspberry Pi and mounting it for the Jail.

  I've yet to find an SD card good enough that I thought its IO out-performed
my USB-attached drive, and that's probably made worse by throwing ZFS on top
of it with my workloads (I'm specifically thinking about available RAM for
disk cache).

> Question part.
> So, is there a way to build a ZFS image (zroot) so that I could use iocage
> natively on my SD card? ...

  So... yes.  I'm paraphrasing from my original notes from when 12.0 was
CURRENT, which was apparently 2018/8/2, so you'll have to adapt.  In short, I
took the image FreeBSD distributed, repartitioned it with ZFS, and copied
their files into it.  That may be enough to keep iocage happy.

    o	Grab the image to base your stuff on
    o	Mount it via mdconfig
		mdconfig -a -t vnode -o readonly -f something.img -u 0
    o	Get your new SD card somewhere where you can write to it
    	(I had a little USB-adapter)
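
    o	(Not from my original notes -- just the shell variables that the
	commands below assume.  The values are only examples; mine showed up
	as da0, and pick whatever pool name you like:)
		geom disk list		# find the SD card's device name
		DISK=da0		# the SD card in its USB adapter
		POOL=rpool		# example name; used as ${POOL} below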

    o	Nuke all partitions on your new disk; set up partitioning
		gpart create -s MBR ${DISK}
    o	Build the first partition to keep uboot happy.  Note that
	I align my 2nd partition to 4M to try and optimize things.
	That \!12 is a funky way to specify the fat32lba type that
	I had to use at that point in time:
		gpart add -t freebsd -b51m -a4m -i2 ${DISK}
		gpart add -t \!12 -a4m -i1 ${DISK}
		gpart set -i1 -a active ${DISK}

	So now my disk looked something like this:

		# gpart show $DISK
		=>       63  124735425  da0  MBR  (59G)
		         63       8129       - free -  (4.0M)
		       8192      98304    1  fat32lba  [active]  (48M)
		     106496  124624896    2  freebsd  (59G)
		  124731392       4096       - free -  (2.0M)

    o	Give us a filesystem for uboot:
		newfs_msdos -F16 -L uboot /dev/${DISK}s1

    o	Set up our mountpoints for copying data:
	(md0s1 should be the uboot dos partition from the image)
		mkdir -p /mnt/src /mnt/dst /mnt/zfs
		mount -v -t msdosfs -o ro /dev/md0s1 /mnt/src
		mount -v -t msdosfs -o rw /dev/${DISK}s1 /mnt/dst

		sh -c "cd /mnt/src && tar cf - *" | sh -c "cd /mnt/dst && tar fvx -"

		umount -v /mnt/src && umount -v /mnt/dst

    o	Now work on the BSD label.  As I recall, I needed the UFS partition
	for /boot since uboot couldn't handle ZFS directly.  I can't say
	that I'm too impressed with my layout.  In real life, I'd probably
	want to make the ZFS partition last, since it's the one most likely
	to need expanding (there's a sketch of that ordering after the
	layout output below).  IMHO, you REALLY, REALLY want a native swap
	partition that's decently (if not max) sized:

		gpart create -s BSD ${DISK}s2
		gpart add -t freebsd-zfs  -a4m -s 40g ${DISK}s2
		gpart add -t freebsd-ufs  -a4m -s 12g ${DISK}s2
		gpart add -t freebsd-swap -a4m        ${DISK}s2

	So the layout looked like this afterwards:

		# gpart show ${DISK}s2
		=>        0  124624896  da0s2  BSD  (59G)
		          0   83886080      1  freebsd-zfs  (40G)
		   83886080   25165824      2  freebsd-ufs  (12G)
		  109051904   15572992      4  freebsd-swap  (7.4G)
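
	A sketch of the ordering I said I'd prefer, with ZFS last so it's the
	easy one to grow later.  This is not what I actually ran, and the
	sizes are only examples:

		gpart create -s BSD ${DISK}s2
		gpart add -t freebsd-ufs  -a4m -s 12g ${DISK}s2
		gpart add -t freebsd-swap -a4m -s 8g  ${DISK}s2
		gpart add -t freebsd-zfs  -a4m        ${DISK}s2

	(The third add lands at index 4, skipping 'c', just like in the
	output above, so the later newfs/zpool commands would need the
	matching partition letters.)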

    o	Rebuild UFS partition, copy data in:
		newfs -L uroot /dev/${DISK}s2b

		mount -v -t ufs -o ro /dev/md0s2a /mnt/src
		mount -v -t ufs -o rw /dev/${DISK}s2b /mnt/dst

		sh -c "cd /mnt/src && tar cf - ." | sh -c "cd /mnt/dst && tar fvx -"

    o	Rebuild the ZFS partition:
		zpool create -f -m none -o altroot=/mnt/zfs ${POOL} ${DISK}s2a
		zfs set compression=lz4 ${POOL}
		zfs set atime=off ${POOL}

		zfs create -o mountpoint=none ${POOL}/ROOT

		zfs create ${POOL}/ROOT/default
		zfs set mountpoint=/ canmount=noauto ${POOL}/ROOT/default
		zpool set bootfs=${POOL}/ROOT/default ${POOL}
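
	(A quick sanity check I'd do at this point; these are stock
	zpool/zfs commands rather than anything from my notes:)
		zpool get bootfs ${POOL}
		zfs list -r -o name,mountpoint,canmount ${POOL}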

    o	Create the ZFS file system hierarchy.  I think I based this a bit on
	an old wiki article, partly my own preference (specifically obj &
	ports):
	(https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot)
	Note the wiki-style commands below use its example names (zroot and
	ada0p3); substitute ${POOL} and ${DISK}s2a, and skip the pool/ROOT
	creation lines that just repeat the previous step.

		zpool create -o altroot=/mnt zroot ada0p3

		zfs set compress=on                                            zroot

		zfs create -o mountpoint=none                                  zroot/ROOT
		zfs create -o mountpoint=/ -o canmount=noauto                  zroot/ROOT/default
		mount -t zfs zroot/ROOT/default /mnt

		zfs create -o mountpoint=/tmp  -o exec=on      -o setuid=off   zroot/tmp
		zfs create -o canmount=off -o mountpoint=/usr                  zroot/usr
		zfs create                                                     zroot/usr/home
		zfs create                     -o exec=on      -o setuid=off   zroot/usr/src
		zfs create                                                     zroot/usr/obj
		zfs create -o mountpoint=/usr/ports            -o setuid=off   zroot/usr/ports
		zfs create                     -o exec=off     -o setuid=off   zroot/usr/ports/distfiles
		zfs create                     -o exec=off     -o setuid=off   zroot/usr/ports/packages
		zfs create -o canmount=off -o mountpoint=/var                  zroot/var
		zfs create                     -o exec=off     -o setuid=off   zroot/var/audit
		zfs create                     -o exec=off     -o setuid=off   zroot/var/crash
		zfs create                     -o exec=off     -o setuid=off   zroot/var/log
		zfs create -o atime=on         -o exec=off     -o setuid=off   zroot/var/mail
		zfs create                     -o exec=on      -o setuid=off   zroot/var/tmp

    o	Copy the files in....
		mount -vt zfs ${POOL}/ROOT/default /mnt/zfs

		zfs set mountpoint=/${POOL} ${POOL}

		sh -c "cd /mnt/src && tar cf - ." | sh -c "cd /mnt/zfs && tar fvx -"
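
    o	(Not in my original notes, but roughly the teardown I'd expect
	before moving the card over to the Pi -- adjust to taste:)
		umount -v /mnt/src
		umount -v /mnt/zfs
		zpool export ${POOL}
		mdconfig -d -u 0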


... and customize.  Make sure the various config files have the ZFS bits
enabled:

  Enable ZFS in rc.conf, and don't allow any more automatic growth of the
root filesystem:

	/etc/rc.conf:
		zfs_enable="YES"
		# growfs_enable="YES"


	/boot/loader.conf
		hw.usb.template=3
		umodem_load="YES"

		opensolaris_load="YES"
		zfs_load="YES"

  I don't remember why I had the USB template.  I think umodem was
RPI-specific.  Those might have come from the original distribution, and I
kept them in.  The ZFS stuff is obvious (I think the opensolaris line may not
be needed now, but at the time it was a dependency that didn't auto-load).
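
  If you want to check whether the opensolaris line is still needed on your
release (I haven't re-tested this), load zfs by hand on a running system and
see what comes along for the ride:

	kldload zfs
	kldstat | grep -E 'zfs|opensolaris'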

	/etc/fstab
		# Device		Mountpoint	FStype	Options			Dump	Pass#
		/dev/ufs/uroot		/		ufs	rw			1	1
		/dev/msdosfs/UBOOT	/boot/msdos	msdosfs	rw,noatime		0	0
		tmpfs			/tmp		tmpfs	rw,mode=1777,size=50m	0	0

		#/dev/mmcsd0s2b		none		swap	sw			0	0

  The ZFS datasets should auto-mount at the mountpoints we set up when we
created them.  I'm not sure why I commented out the swap here.
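
  If you do want the swap enabled (I would), note that it landed at index 4
in the layout above, so on the Pi's own SD controller I'd expect it to show
up as something like mmcsd0s2d -- check with "gpart show" before trusting
that guess:

	/etc/fstab
		/dev/mmcsd0s2d		none		swap	sw			0	0

	then either reboot or turn it on by hand:
		swapon -a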


