Date:      Sun, 15 Oct 2023 09:43:23 -0700
From:      Alan Somers <asomers@freebsd.org>
To:        freebsd-hackers@freebsd.org
Subject:   Re: zpool geli encryption question
Message-ID:  <CAOtMX2i+5x3_FmX75B83FRhDgY9LW5PV3KTCpnZbpLZ6tfWecg@mail.gmail.com>
In-Reply-To: <ZSwVurVf27qZvmTd@int21h>
References:  <ZSvrhL3IV4642-n5@int21h> <CAOtMX2jAs9+M79N-4LC9EdZ3y4jbvhWEWCs9KFxL9r=6zt+wZw@mail.gmail.com> <ZSwVurVf27qZvmTd@int21h>

On Sun, Oct 15, 2023 at 9:39 AM void <void@f-m.fm> wrote:
>
> On Sun, Oct 15, 2023 at 07:17:57AM -0700, Alan Somers wrote:
>
> >How much of the FreeBSD VM's disk is actually in-use?
>
> (in the example below, another vm instance, same observation)
>
> from the host:
>
> NAME                   USED    AVAIL REFER  MOUNTPOINT
> ssdzfs/fbsd140Rv1       97.5G   309G  21.3G  -
>
> within the booted vm:
>
> NAME                AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> zroot               74.8G  9.97G        0B     96K             0B      9.97G
> zroot/ROOT          74.8G  4.61G        0B     96K             0B      4.61G
> zroot/ROOT/default  74.8G  4.61G        0B   4.61G             0B         0B
> zroot/home          74.8G  59.6M        0B   59.6M             0B         0B
> zroot/tmp           74.8G   120K        0B    120K             0B         0B
> zroot/usr           74.8G  5.28G        0B     96K             0B      5.28G
> zroot/usr/ports     74.8G  5.28G        0B   5.28G             0B         0B
> zroot/usr/src       74.8G    96K        0B     96K             0B         0B
> zroot/var           74.8G  1.17M        0B     96K             0B      1.08M
> zroot/var/audit     74.8G    96K        0B     96K             0B         0B
> zroot/var/crash     74.8G    96K        0B     96K             0B         0B
> zroot/var/log       74.8G   564K        0B    564K             0B         0B
> zroot/var/mail      74.8G   252K        0B    252K             0B         0B
> zroot/var/tmp       74.8G    96K        0B     96K             0B         0B
>
> gzipped archive:
>
> -rw-r--r--   1 root wheel   21G 15 Oct 16:39 2023.10.15_15:57.fbsd140Rv1.gz
>
> >Maybe you are using TRIM with FreeBSD, which punches holes in the host's
> >ZFS storage.
>
> On the bhyve host (14.0-BETA3 #0 releng/14.0-n265111)
>
> vfs.zfs.vdev.trim_min_active: 1
> vfs.zfs.vdev.trim_max_active: 2
> vfs.zfs.trim.queue_limit: 10
> vfs.zfs.trim.txg_batch: 32
> vfs.zfs.trim.metaslab_skip: 0
> vfs.zfs.trim.extent_bytes_min: 32768
> vfs.zfs.trim.extent_bytes_max: 134217728
> vfs.zfs.l2arc.trim_ahead: 0
> vfs.ffs.dotrimcons: 1
>
> Does this mean trim is enabled and active on the host?
> I didn't set it. Maybe it was automatically set because zfs knows the
> hardware is SSD?

Within the VM, do "zpool get autotrim zroot" to see if it's set.  You
can also manually trim with "zpool trim zroot" if you don't want to
use the autotrim setting.
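
For example (output here is illustrative, assuming a stock guest where
autotrim is still at its default of off):

  # zpool get autotrim zroot
  NAME   PROPERTY  VALUE  SOURCE
  zroot  autotrim  off    default

  # zpool trim zroot
  # zpool status -t zroot     <- shows per-vdev trim progress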

Note that even without trim, it's possible that there are LBAs which
the VM simply has never written to.  That could also explain the low
space usage on the host.
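
A quick way to check from the host side (dataset name taken from your
earlier listing) is to compare the zvol's logical size with what it
actually references:

  # zfs get volsize,referenced,compressratio ssdzfs/fbsd140Rv1

If referenced is well below volsize, most of the difference is blocks
the guest has never written or has since trimmed.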

>
> > That would explain why compression seems to save space, even
> > though the data is encrypted.
>
> That's really smart.
>
> TYVM for the explainer.
> --


