Date:      Tue, 18 Jan 2022 09:12:50 -0500
From:      Rich <rincebrain@gmail.com>
To:        Florent Rivoire <florent@rivoire.fr>
Cc:        freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Re: [zfs] recordsize: unexpected increase of disk usage when increasing it
Message-ID:  <CAOeNLuopaY3j7P030KO4LMwU3BOU5tXiu6gRsSKsDrFEuGKuaA@mail.gmail.com>
In-Reply-To: <CADzRhsEsZMGE-SoeWLMG9NTtkwhhy6OGQQ046m9AxGFbp5h_kQ@mail.gmail.com>
References:  <CADzRhsEsZMGE-SoeWLMG9NTtkwhhy6OGQQ046m9AxGFbp5h_kQ@mail.gmail.com>

Compression would have made your life better here, and possibly also made
it clearer what's going on.

All records in a file are going to be the same size pre-compression - so if
you set the recordsize to 1M and save a 131.1M file, it's going to take up
132M on disk before compression/raidz overhead/whatnot.
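
A quick back-of-the-envelope check of that rounding, in plain POSIX
shell arithmetic (treating "M" as MiB here):

# rs=$((1024 * 1024)); size=$((1311 * 1024 * 1024 / 10))
# echo $(( (size + rs - 1) / rs ))
132
# echo $(( (size + rs - 1) / rs * rs ))
138412032

That's 132 full 1M records, i.e. 132M allocated before compression.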

Usually compression saves you from the tail padding actually requiring
allocation on disk, which is one reason I encourage everyone to at least
use lz4 (or, if you absolutely cannot for some reason, I guess zle should
also work for this one case...)
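
If you wanted to redo the 1M run with compression, recreating the
bench pool with lz4 (and the recordsize) set at creation should look
roughly like this - same device path as in your commands below, with
-o for pool properties and -O for filesystem properties:

# zpool destroy bench
# zpool create -o ashift=12 -O compression=lz4 -O recordsize=1M bench \
    /dev/gptid/3c0f5cbc-b0ce-11ea-ab91-c8cbb8cc3ad4

The raw photos and videos won't shrink much, but the zero-filled tails
of the last records should mostly stop costing you anything.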

But I would say the difference is probably the sum of last-record
padding across the whole dataset, if you don't have compression on.
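
You can also see it per file by comparing the logical size with what
actually got allocated (the path here is just a made-up example):

# ls -l /bench/photos/IMG_1234.CR2
# du -h /bench/photos/IMG_1234.CR2

Without compression, du should come back rounded up to a whole number
of records (plus a little metadata); with lz4 the padded tail of the
last record mostly disappears. The block size histogram in the zdb
-bbbs output you already dumped should show the same thing in
aggregate, if I'm remembering its format right.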

- Rich

On Tue, Jan 18, 2022 at 8:57 AM Florent Rivoire <florent@rivoire.fr> wrote:

> TLDR: I rsync-ed the same data twice: once with 128K recordsize and
> once with 1M, and the allocated size on disk is ~3% bigger with 1M.
> Why not smaller?
>
>
> Hello,
>
> I would like some help to understand how the disk usage evolves when I
> change the recordsize.
>
> I've read several articles/presentations/forums about recordsize in
> ZFS, and if I try to summarize, I mainly understood that:
> - recordsize is the "maximum" size of the "objects" (i.e. "logical
> blocks") that zfs will create for both data & metadata; each object
> is then compressed, allocated to one vdev, split into smaller
> (ashift-sized) "physical" blocks and written to disk
> - increasing recordsize is usually good when storing large files that
> are not modified, because it limits the number of metadata objects
> (block pointers), which has a positive effect on performance
> - decreasing recordsize is useful for database-like workloads (i.e.
> small random writes inside existing objects), because it avoids write
> amplification (a read-modify-write of a large object for a small update)
>
> Today, I'm trying to observe the effect of increasing recordsize for
> *my* data (I'm also considering defining special_small_blocks and
> using SSDs as a "special" vdev, but that's neither tested nor
> discussed here, just recordsize).
> So, I'm doing some benchmarks on my "documents" dataset (details in
> "notes" below), but the results are really strange to me.
>
> When I rsync the same data to a freshly-recreated zpool:
> A) with recordsize=128K: 226G allocated on disk
> B) with recordsize=1M: 232G allocated on disk => bigger than 128K?!?
>
> I would clearly expect the other way around, because a bigger
> recordsize generates less metadata and therefore less disk usage, and
> there shouldn't be any overhead because 1M is just a maximum, not a
> size that is force-allocated for every object.
> I don't mind the increased usage (I can live with a few GB more), but
> I would like to understand why it happens.
>
> I tried to give all the details of my tests below.
> Did I do something wrong? Can you explain the increase?
>
> Thanks!
>
>
>
> ===============================================
> A) 128K
> ==========
>
> # zpool destroy bench
> # zpool create -o ashift=12 bench
> /dev/gptid/3c0f5cbc-b0ce-11ea-ab91-c8cbb8cc3ad4
>
> # rsync -av --exclude '.zfs' /mnt/tank/docs-florent/ /bench
> [...]
> sent 241,042,476,154 bytes  received 353,838 bytes  81,806,492.45 bytes/sec
> total size is 240,982,439,038  speedup is 1.00
>
> # zfs get recordsize bench
> NAME   PROPERTY    VALUE    SOURCE
> bench  recordsize  128K     default
>
> # zpool list -v bench
> NAME                                          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
> bench                                        2.72T   226G  2.50T        -         -     0%     8%  1.00x    ONLINE  -
>   gptid/3c0f5cbc-b0ce-11ea-ab91-c8cbb8cc3ad4 2.72T   226G  2.50T        -         -     0%  8.10%      -    ONLINE
>
> # zfs list bench
> NAME    USED  AVAIL     REFER  MOUNTPOINT
> bench   226G  2.41T      226G  /bench
>
> # zfs get all bench |egrep "(used|referenced|written)"
> bench  used                  226G                   -
> bench  referenced            226G                   -
> bench  usedbysnapshots       0B                     -
> bench  usedbydataset         226G                   -
> bench  usedbychildren        1.80M                  -
> bench  usedbyrefreservation  0B                     -
> bench  written               226G                   -
> bench  logicalused           226G                   -
> bench  logicalreferenced     226G                   -
>
> # zdb -Lbbbs bench > zpool-bench-rcd128K.zdb
>
>
>
> ===============================================
> B) 1M
> ==========
>
> # zpool destroy bench
> # zpool create -o ashift=12 bench
> /dev/gptid/3c0f5cbc-b0ce-11ea-ab91-c8cbb8cc3ad4
> # zfs set recordsize=1M bench
>
> # rsync -av --exclude '.zfs' /mnt/tank/docs-florent/ /bench
> [...]
> sent 241,042,476,154 bytes  received 353,830 bytes  80,173,899.88 bytes/sec
> total size is 240,982,439,038  speedup is 1.00
>
> # zfs get recordsize bench
> NAME   PROPERTY    VALUE    SOURCE
> bench  recordsize  1M       local
>
> # zpool list -v bench
> NAME                                          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
> bench                                        2.72T   232G  2.49T        -         -     0%     8%  1.00x    ONLINE  -
>   gptid/3c0f5cbc-b0ce-11ea-ab91-c8cbb8cc3ad4 2.72T   232G  2.49T        -         -     0%  8.32%      -    ONLINE
>
> # zfs list bench
> NAME    USED  AVAIL     REFER  MOUNTPOINT
> bench   232G  2.41T      232G  /bench
>
> # zfs get all bench |egrep "(used|referenced|written)"
> bench  used                  232G                   -
> bench  referenced            232G                   -
> bench  usedbysnapshots       0B                     -
> bench  usedbydataset         232G                   -
> bench  usedbychildren        1.96M                  -
> bench  usedbyrefreservation  0B                     -
> bench  written               232G                   -
> bench  logicalused           232G                   -
> bench  logicalreferenced     232G                   -
>
> # zdb -Lbbbs bench > zpool-bench-rcd1M.zdb
>
>
>
> ===============================================
> Notes:
> ==========
>
> - the source dataset contains ~50% pictures (raw files and jpg), plus
> some music, various archived documents, zip archives and videos
> - no change was made to the source dataset while testing (cf. the
> sizes logged by rsync)
> - I repeated the tests twice (128K, then 1M, then 128K, then 1M) and
> got the same results
> - probably not important here, but:
> /dev/gptid/3c0f5cbc-b0ce-11ea-ab91-c8cbb8cc3ad4 is a Red 3TB CMR
> (WD30EFRX), and /mnt/tank/docs-florent/ is a 128K-recordsize dataset
> on another zpool that I never tweaked except ashift=12 (because it
> uses the same model of Red 3TB)
>
> # zfs --version
> zfs-2.0.6-1
> zfs-kmod-v2021120100-zfs_a8c7652
>
> # uname -a
> FreeBSD xxxxxxxxx 12.2-RELEASE-p11 FreeBSD 12.2-RELEASE-p11
> 75566f060d4(HEAD) TRUENAS  amd64
>



