Date:      Fri, 27 Oct 2023 17:23:49 +0200 (CEST)
From:      Ronald Klop <ronald-lists@klop.ws>
To:        void <void@f-m.fm>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: periodic daily takes a very long time to run (14-stable)
Message-ID:  <1210534753.8409.1698420229888@localhost>
In-Reply-To: <ZTvMODY-mcBImHZP@int21h>
References:  <ZTuNvVMW_XG3mZKU@int21h> <1122335317.4913.1698407124469@localhost> <ZTuyXPjddEPqh-bi@int21h> <794932758.6659.1698413675475@localhost> <ZTvMODY-mcBImHZP@int21h>

From: void <void@f-m.fm>
Date: Friday, 27 October 2023 16:42
To: freebsd-stable@freebsd.org
Subject: Re: periodic daily takes a very long time to run (14-stable)
> 
> On Fri, Oct 27, 2023 at 03:34:35PM +0200, Ronald Klop wrote:
> >
> >What stands out to me is that you do quite a lot of writes on the disk. (I might be mistaken.)
> >The max. number of IOPS for HDD is around 80 for consumer grade harddisks. I think this counts for USB connected disks.
> >https://en.wikipedia.org/wiki/IOPS#Mechanical_hard_drives
> >From the stats you posted it looks like you are almost always doing 50+ writes/second already. That does not leave much IOPS for the find process.
> >
> >ZFS tries to bundle the writes every 5 seconds to leave room for the reads. See "sysctl vfs.zfs.txg.timeout". Unless it has too much data to write or a sync request comes in.
> 
> % sysctl vfs.zfs.txg.timeout
> vfs.zfs.txg.timeout: 5
> 
> do I need to tune this?
> 
> Here's equivalent output from my setup (I ran periodic daily again)
> 
> #device       r/s     w/s     kr/s     kw/s  ms/r  ms/w  ms/o  ms/t qlen  %b
> da0            16      18    191.9    557.9    50     8   144    29   10  24
> da0           107       0    699.7      0.0    52     0     0    52    1  99
> da0           102       0    409.2      0.0    71     0     0    71    2  98
> da0            65       6    259.6     49.4   101   143     0   105   12 101
> da0            57      14    227.7    123.9   153   163     0   155   12 100
> da0            40      19    158.8    285.8   205   103     0   172   12  98
> da0            46      30    191.1    441.9   180    58     0   132   11  91
> da0            63       4    261.6     16.1   162   250   239   170    6 112
> da0            67      10    273.7     83.6    99    66     0    95   12  91
> da0            32      21    129.4    177.9   223   102     0   175    5  97
> da0            48      16    191.9    261.3   173   130     0   162    9 109
> da0            38      19    152.2    191.3   168    61   292   139    8 104
> da0            92       0    366.9      0.0   104     0     0   104    4 100
> da0            73      10    291.7     87.9    76    99     0    79   12  97
> da0            49      15    195.2    270.9   156   129     0   150   11 103
> da0            53      15    212.3    248.3   139   128     0   137   12  92
> da0            54      22    216.1    272.1   151    81    92   130    8 107
> da0            80       4    320.9     16.0    74   201   125    80    3 100
> da0            55      10    218.8     72.9    89    73     0    87   11  82 ^C
> 
> % zpool iostat 1
> capacity     operations     bandwidth
> pool        alloc   free   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> zroot       93.6G   818G     13     16   161K   506K
> zroot       93.6G   818G     91      0   367K      0
> zroot       93.6G   818G    113      0   454K      0
> zroot       93.6G   818G    102      0   411K      0
> zroot       93.6G   818G     98      0   422K      0
> zroot       93.6G   818G     67     18   271K   171K
> zroot       93.6G   818G     43     16   173K   252K
> zroot       93.6G   818G     43     28   173K   376K
> zroot       93.6G   818G     78      3   315K  15.9K
> zroot       93.6G   818G     94      0   378K      0
> zroot       93.6G   818G    103      0   414K      0
> zroot       93.6G   818G    102      0   658K      0
> zroot       93.6G   818G     98      0   396K      0
> zroot       93.6G   818G    109      0   438K      0
> zroot       93.6G   818G    101      0   404K      0
> zroot       93.6G   818G     47     13   191K  91.4K
> zroot       93.6G   818G     52     11   209K   126K
> zroot       93.6G   818G     50     20   202K   301K
> zroot       93.6G   818G     46     12   186K   128K
> zroot       93.6G   818G     86      0   346K  3.93K
> zroot       93.6G   818G     45     18   183K   172K
> zroot       93.6G   818G     42     15   172K   343K
> zroot       93.6G   818G     43     24   173K   211K
> zroot       93.6G   818G     87      0   596K      0
> ^C
> 
> >So if my observation is right it might be interesting to find out what is writing.
> would ktrace and/or truss be useful? something else? The truss -p output of the
> find PID produces massive amounts of output, all like this:
> 
> fstatat(AT_FDCWD,"5e70d5f895ccc92af6a7d5226f818b-81464.o",{ mode=-rw-r--r-- ,inode=367004,size=10312,blksize=10752 },AT_SYMLINK_NOFOLLOW) = 0 (0x0)
> 
> with the filename changing each time
> 
> (later...)
> 
> that file is in ccache!!!
> 
> locate 5e70d5f895ccc92af6a7d5226f818b-81464.o
> /var/cache/ccache/f/5/5e70d5f895ccc92af6a7d5226f818b-81464.o
> 
> maybe if I can exclude that dir (and /usr/obj) it'll lessen the periodic runtime.
> But I don't know yet what's calling find(1) when periodic daily runs. If I can, I might be able to tell it not to walk certain hierarchies.
> 
> >I had similar issues after the number of jails on my RPI4 increased and they all were doing a little bit of writing which accumulated in quite a lot of writing.
> 
> I'm at a loss as to what's doing the writing. The system runs the following:
> 
> poudriere-devel # for aarch64 and armv7
> apcupsd         # for ups monitoring
> vnstat          # bandwidth use, writes to its db in /var/db/vnstat
> sshd
> exim (local)
> pflogd          # right now it's behind a firewall, on NAT so it's not doing much
> pf              # same
> ntpd
> powerd
> nginx           # this serves the poudriere web frontend, and that's it (http-only)
> syslogd
> 
> >My solution was to add an SSD.
> 
> I have an nfs alternative. The LAN is 1Gb. But I think the fix will be to tell find
> to not search some paths. Just need to work out how to do it.
> 
> What would the effect of increasing or decreasing the txg delta be on system performance?
> -- 




Well. You could remove daily_clean_disks_enable="YES" from /etc/periodic.conf. That saves you the "find". I have never used it before. The default is "off".

$ grep clean_disks /etc/defaults/periodic.conf
daily_clean_disks_enable="NO"                # Delete files daily
daily_clean_disks_files="[#,]* .#* a.out *.core *.CKP .emacs_[0-9]*"
daily_clean_disks_days=3                     # If older than this
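
If you want to double-check which script that knob drives before editing anything, something like this should work (a rough sketch; on a stock system it is /etc/periodic/daily/100.clean-disks, but check your own tree):

$ grep -l daily_clean_disks /etc/periodic/daily/*
/etc/periodic/daily/100.clean-disks
$ grep daily_clean_disks_enable /etc/periodic.conf

If the second grep prints a "YES" line, delete it or set it back to "NO" and the nightly full-tree find goes away.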

The list of files it checks for doesn't look very useful to me in 2023. And it does a full find over *all* directories and files, *every day*???
If you have a lot of *.core files you are better off putting this in sysctl.conf: kern.corefile=/var/tmp/%U.%N.%I.%P.core . So you know where to look to delete them.
Actually my RPI3 has this in cron: @daily  find /var/tmp/ -name "*.core" -mtime +7 -ls -delete
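
(If you go the kern.corefile route, it is just a sysctl, so you can make it persistent and apply it without a reboot. A minimal sketch, assuming the default %N.core is still in place; the path pattern is only the example above:)

# echo 'kern.corefile=/var/tmp/%U.%N.%I.%P.core' >> /etc/sysctl.conf
# sysctl kern.corefile=/var/tmp/%U.%N.%I.%P.core
kern.corefile: %N.core -> /var/tmp/%U.%N.%I.%P.core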

About the vfs.zfs.txg.timeout. Don't mess with it if you don't have a solid reason. It used to be 30 seconds when ZFS was new if I remember correctly. But it could give too big write bursts which paused the system noticeably. This is all from memory. I might be totally wrong here. :-)
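
(For what it's worth, it is a plain runtime sysctl on FreeBSD, so if you ever do have a reason to experiment you can try and revert it without rebooting. The value 10 below is only an example:)

# sysctl vfs.zfs.txg.timeout        # current value, 5 by default
# sysctl vfs.zfs.txg.timeout=10     # longer interval: bigger but less frequent write bursts
# sysctl vfs.zfs.txg.timeout=5      # put it back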

The stats of your system look pretty reasonable. Also the zpool iostat 1 indicates periods of no writes, so that is ok. You just have a lot of IOPS for 1 spinning disk when everything runs together. Poudriere & ccache indicate that you are using it for compiling pkgs or other stuff. That is pretty heavy for your setup if you manage to run things in parallel, as the RPI4 has 4 CPUs. It doesn't help to run daily_cleanup together. ;-)
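
(If the overlap is the main problem, the time of the daily run is just the "periodic daily" line in /etc/crontab, so shifting it away from your usual poudriere window is a one-line change. The stock entry looks roughly like this; adjust the hour to taste:)

# minute hour mday month wday who     command
1        3    *    *     *    root    periodic daily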

Regards,
Ronald.




