Date:      Wed, 29 Apr 2020 17:44:44 +0200
From:      Maurizio Vairani <maurizio1018@gmail.com>
To:        Ryan Moeller <freqlabs@freebsd.org>
Cc:        freebsd-current <freebsd-current@freebsd.org>, freebsd-stable@freebsd.org
Subject:   Re: OpenZFS port updated
Message-ID:  <CAN0zgYXmSKN8G30dz8RWmyn4rz-bDG+xnOS_g=X1hJvj2HzJtQ@mail.gmail.com>
In-Reply-To: <A61E33DF-96D0-449D-8665-9089599F0583@FreeBSD.org>
References:  <A61E33DF-96D0-449D-8665-9089599F0583@FreeBSD.org>

On Fri, Apr 17, 2020 at 8:36 PM Ryan Moeller <freqlabs@freebsd.org> wrote:

> FreeBSD support has been merged into the master branch of the openzfs/zfs
> repository, and the FreeBSD ports have been switched to this branch.
>
> OpenZFS brings many exciting features to FreeBSD, including:
>  * native encryption
>  * improved TRIM implementation
>  * most recently, persistent L2ARC
>
> Of course, avoid upgrading your pools if you want to keep the option to
> go back to the base ZFS.
>
> OpenZFS can be installed alongside the base ZFS. Change your loader.conf
> entry to openzfs_load="YES" to load the OpenZFS module at boot, and set
> PATH to find the tools in /usr/local/sbin before /sbin. The base zfs
> tools are still basically functional with the OpenZFS module, so
> changing PATH in rc is not strictly necessary.
>
> The FreeBSD loader can boot from pools with the encryption feature
> enabled, but the root/bootenv datasets must not be encrypted themselves.
>
> The FreeBSD platform support in OpenZFS does not yet include all features
> present in FreeBSD's ZFS. Some notable changes/missing features include:
>  * many sysctl names have changed (legacy compat sysctls should be added
> at some point)
>  * zfs send progress reporting in process title via setproctitle
>  * extended 'zfs holds -r' (
> https://svnweb.freebsd.org/base?view=revision&revision=290015)
>  * vdev ashift optimizations (
> https://svnweb.freebsd.org/base?view=revision&revision=254591)
>  * pre-mountroot zpool.cache loading (for automatic pool imports)
>
> To the last point, this mainly affects the case where / is on ZFS and
> /boot is not or is on a different pool. OpenZFS cannot handle this case
> yet, but work is in progress to cover that use case. Booting directly
> from ZFS does work.
>
> If there are pools that need to be imported at boot other than the boot
> pool, OpenZFS does not automatically import yet, and it uses
> /etc/zfs/zpool.cache rather than /boot/zfs/zpool.cache to keep track of
> imported pools.  To ensure all pool imports occur automatically, a simple
> edit to /etc/rc.d/zfs will suffice:
>
> diff --git a/libexec/rc/rc.d/zfs b/libexec/rc/rc.d/zfs
> index 2d35f9b5464..8e4aef0b1b3 100755
> --- a/libexec/rc/rc.d/zfs
> +++ b/libexec/rc/rc.d/zfs
> @@ -25,6 +25,13 @@ zfs_start_jail()
>
>  zfs_start_main()
>  {
> +       local cachefile
> +
> +       for cachefile in /boot/zfs/zpool.cache /etc/zfs/zpool.cache; do
> +               if [ -f $cachefile ]; then
> +                       zpool import -c $cachefile -a
> +               fi
> +       done
>         zfs mount -va
>         zfs share -a
>         if [ ! -r /etc/zfs/exports ]; then
>
> This will probably not be needed long-term. It is not necessary if the
> boot pool is the only pool.
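As a quick sanity check of the loop in that patch, the same logic can be exercised with stub files on any system; here "zpool import -c $cachefile -a" is replaced by a placeholder so nothing real is imported, and the file names are made up for the test:

```shell
#!/bin/sh
# Stand-alone sketch of the cache-file loop from the rc.d/zfs patch above.
# The real "zpool import" call is replaced by appending to a variable so
# the control flow can be verified without any ZFS pools present.
tmpdir=$(mktemp -d)
touch "$tmpdir/etc.cache"   # simulate: only /etc/zfs/zpool.cache exists

imported=""
for cachefile in "$tmpdir/boot.cache" "$tmpdir/etc.cache"; do
        if [ -f "$cachefile" ]; then
                imported="$imported $cachefile"   # placeholder for: zpool import -c $cachefile -a
        fi
done
echo "would import:$imported"

rm -rf "$tmpdir"
```

The missing boot.cache file is skipped silently, which is exactly why the patched loop is safe to run whether one or both cache files exist.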
>
> Happy testing :)
>
> - Ryan
> _______________________________________________
> freebsd-current@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
>

I am testing the new OpenZFS on my laptop. I am running:

> uname -a

FreeBSD NomadBSD 12.1-RELEASE-p3 FreeBSD 12.1-RELEASE-p3 GENERIC  amd64

> freebsd-version -ku

12.1-RELEASE-p3
12.1-RELEASE-p4

I want ZFS to write to the laptop's SSD only every 1800 seconds:

> sudo zfs set sync=disabled zroot

and I have added these lines to /etc/sysctl.conf:

# Write to the SSD every 30 minutes.
# 19/04/20 Added support for OpenZFS.
# Force a Transaction Group (TXG) commit every 1800 s, increased to
# aggregate more data (default 5 s).
# vfs.zfs.txg.timeout for ZFS, vfs.zfs.txg_timeout for OpenZFS
vfs.zfs.txg.timeout=1800
vfs.zfs.txg_timeout=1800
# Throttle writes when dirty ("modified") data reaches 98% of
# dirty_data_max (default 60%).
vfs.zfs.delay_min_dirty_percent=98
# Force a TXG commit if dirty data reaches 95% of dirty_data_max
# (default 20%).
# vfs.zfs.dirty_data_sync_pct for ZFS, vfs.zfs.dirty_data_sync_percent for OpenZFS
vfs.zfs.dirty_data_sync_pct=95
vfs.zfs.dirty_data_sync_percent=95
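For a feel of what those percentages mean in bytes, here is a quick arithmetic sketch; the 4 GiB dirty_data_max is an assumed value for illustration, not taken from this laptop:

```shell
# Thresholds in bytes for an assumed dirty_data_max of 4 GiB:
dirty_data_max=$((4 * 1024 * 1024 * 1024))
sync_at=$((dirty_data_max * 95 / 100))    # forced TXG commit (95%)
delay_at=$((dirty_data_max * 98 / 100))   # write throttle engages (98%)
echo "commit at $sync_at bytes, throttle at $delay_at bytes"
```

So with these settings a TXG is forced either by the 1800 s timer or when dirty data grows close to dirty_data_max, whichever comes first.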

To test the above settings I use the command 'zpool iostat -v -Td zroot 600'.
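With the 1800 s txg timeout and a 600 s iostat sampling interval, and ignoring any other sources of writes, forced commits should show up as write activity in roughly one out of every three samples:

```shell
# 1800 s between forced TXG commits, sampled every 600 s by zpool iostat:
txg_timeout=1800
interval=600
samples_per_commit=$((txg_timeout / interval))
echo "expect write activity roughly every $samples_per_commit samples"
```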

On the classic FreeBSD ZFS the output of the above command is similar to:

Tue Apr 28 14:44:08 CEST 2020
                                 capacity     operations    bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G    206     38  5.52M   360K
  diskid/DISK-185156448914p2  31.9G  61.1G    206     38  5.52M   360K
----------------------------  -----  -----  -----  -----  -----  -----
Tue Apr 28 14:54:08 CEST 2020
                                 capacity     operations    bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      8      0   297K      0
  diskid/DISK-185156448914p2  31.9G  61.1G      8      0   297K      0
----------------------------  -----  -----  -----  -----  -----  -----
Tue Apr 28 15:04:08 CEST 2020
                                 capacity     operations    bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0  14.4K      0
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0  14.4K      0
----------------------------  -----  -----  -----  -----  -----  -----
Tue Apr 28 15:14:08 CEST 2020
                                 capacity     operations    bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0  2.89K  18.4K
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0  2.89K  18.4K
----------------------------  -----  -----  -----  -----  -----  -----
Tue Apr 28 15:24:08 CEST 2020
                                 capacity     operations    bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0    798      0
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0    798      0
----------------------------  -----  -----  -----  -----  -----  -----
Tue Apr 28 15:34:08 CEST 2020
                                 capacity     operations    bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0  2.43K      0
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0  2.43K      0
----------------------------  -----  -----  -----  -----  -----  -----
Tue Apr 28 15:44:08 CEST 2020
                                 capacity     operations    bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0    587  14.2K
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0    587  14.2K
----------------------------  -----  -----  -----  -----  -----  -----

where the SSD is written every 1800 seconds.

On the new OpenZFS the output is:

Tue Apr 28 15:58:09 2020
                                capacity     operations     bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G    203     24  5.18M   236K
  diskid/DISK-185156448914p2  31.9G  61.1G    203     24  5.18M   236K
----------------------------  -----  -----  -----  -----  -----  -----
Tue Apr 28 16:08:09 2020
                                capacity     operations     bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      8      0   287K  9.52K
  diskid/DISK-185156448914p2  31.9G  61.1G      8      0   287K  9.52K
----------------------------  -----  -----  -----  -----  -----  -----
Tue Apr 28 16:18:09 2020
                                capacity     operations     bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0  15.6K  10.0K
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0  15.6K  10.0K
----------------------------  -----  -----  -----  -----  -----  -----
Tue Apr 28 16:28:09 2020
                                capacity     operations     bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0  3.07K  12.2K
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0  3.07K  12.2K
----------------------------  -----  -----  -----  -----  -----  -----
Tue Apr 28 16:38:09 2020
                                capacity     operations     bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0    573  11.1K
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0    573  11.1K
----------------------------  -----  -----  -----  -----  -----  -----
Tue Apr 28 16:48:09 2020
                                capacity     operations     bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0  1.96K  10.6K
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0  1.96K  10.6K
----------------------------  -----  -----  -----  -----  -----  -----

where the SSD is written in every sample.

What am I missing?

Thanks in advance.

--
Maurizio


