Date:      Fri, 27 Oct 2023 15:34:35 +0200 (CEST)
From:      Ronald Klop <ronald-lists@klop.ws>
To:        void <void@f-m.fm>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: periodic daily takes a very long time to run (14-stable)
Message-ID:  <794932758.6659.1698413675475@localhost>
In-Reply-To: <ZTuyXPjddEPqh-bi@int21h>
References:  <ZTuNvVMW_XG3mZKU@int21h> <1122335317.4913.1698407124469@localhost> <ZTuyXPjddEPqh-bi@int21h>


From: void <void@f-m.fm>
Date: Friday, 27 October 2023 14:51
To: freebsd-stable@freebsd.org
Subject: Re: periodic daily takes a very long time to run (14-stable)
> 
> On Fri, Oct 27, 2023 at 01:45:24PM +0200, Ronald Klop wrote:
> 
> >Can you run "gstat" or "iostat -x -d 1" to see how busy your disk is?
> >And how much bandwidth it uses.
> 
> When periodic is *not* running, gstat numbers fluctuate between what's indicated below and all zeros:
> 
> dT: 1.011s  w: 1.000s
>   L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>      0     69      0      0    0.0     67    732    0.6   24.3| da0
>      0      0      0      0    0.0      0      0    0.0    0.0| da0p1
>      0      0      0      0    0.0      0      0    0.0    0.0| da0p2
>      0     69      0      0    0.0     67    732    0.6   24.3| da0p3
>      0      0      0      0    0.0      0      0    0.0    0.0| da0p2.eli
>      0     69      0      0    0.0     67    732    0.8   24.4| da0p3.eli
> 
> # iostat -x -d 1
>         extended device statistics
> device       r/s     w/s     kr/s     kw/s  ms/r  ms/w  ms/o  ms/t qlen  %b
> da0           15      18    189.4    565.1    47     7   144    27    0  23
> pass0          0       0      0.0      0.0    54     0   154    79    0   0
>         extended device statistics
> device       r/s     w/s     kr/s     kw/s  ms/r  ms/w  ms/o  ms/t qlen  %b
> da0            3      70    172.6    698.0    26     1   129     5    0  33
> pass0          0       0      0.0      0.0     0     0     0     0    0   0
>         extended device statistics
> device       r/s     w/s     kr/s     kw/s  ms/r  ms/w  ms/o  ms/t qlen  %b
> da0            0       0      0.0      0.0     0     0     0     0    0   0
> pass0          0       0      0.0      0.0     0     0     0     0    0   0
> (then all 0s repeatedly)
> 
> When periodic *is* running:
> 
> dT: 1.047s  w: 1.000s
>   L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>      2     89     34    138   88.1     54    783   33.9   95.0| da0
>      0      0      0      0    0.0      0      0    0.0    0.0| da0p1
>      0      0      0      0    0.0      0      0    0.0    0.0| da0p2
>      2     89     34    138   88.1     54    783   34.0   95.0| da0p3
>      0      0      0      0    0.0      0      0    0.0    0.0| da0p2.eli
>      2     90     35    141   86.3     54    783   34.2   96.9| da0p3.eli
> 
> The kBps number varies wildly between 100 and 1200; %busy is 92-102%.
> 
> # iostat -x -d 1
>                          extended device statistics  
> device       r/s     w/s     kr/s     kw/s  ms/r  ms/w  ms/o  ms/t qlen  %b  
> da0           15      18    189.6    564.1    47     7   144    27   10  23
> pass0          0       0      0.0      0.0    54     0   154    79    0   0
> Numbers vary wildly here, too. I wonder if there's a switch for iostat that lets it
> run for, say, 1000 seconds and then produce an average of the values gathered.
> 
> I'll look for one.
> -- 


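About your question on averaging: iostat's first report shows averages since boot, and every later report averages over the interval since the previous one. So something like the following (untested on my side, flags per iostat(8)) should print one 1000-second average as its second block of output:

$ iostat -x -w 1000 -c 2 da0
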
What stands out to me is that you are doing quite a lot of writes to the disk. (I might be mistaken.)
The maximum number of IOPS for a consumer-grade mechanical hard disk is around 80, and I think that holds for USB-connected disks as well.
https://en.wikipedia.org/wiki/IOPS#Mechanical_hard_drives
From the stats you posted it looks like you are almost always doing 50+ writes/second already. That does not leave many IOPS for the find process.
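To put rough numbers on it, assuming that ~80 IOPS ceiling roughly applies to your disk: 50-55 writes/s leaves only ~25-30 IOPS for reads, and your gstat sample during periodic shows ~34 r/s at ~88 ms per read, i.e. the disk is saturated and every read sits in the queue.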

ZFS tries to bundle writes into a transaction group that it flushes every 5 seconds, leaving room for reads in between (see "sysctl vfs.zfs.txg.timeout"), unless it has too much data to write or a sync request comes in. You can see the pattern on my RPI4:
$ iostat -x -w 1 da1 | grep da1
#device       r/s     w/s     kr/s     kw/s  ms/r  ms/w  ms/o  ms/t qlen  %b
da1            6      19     94.0    525.0    15     2   123     8    1  12
da1            0       4      0.0     15.2     0     1   122    41    0  23
da1            0       0      0.0      0.0     0     0     0     0    0   0
da1            0       2      0.0    738.7     0     4    61    23    0   7
da1            0       0      0.0      0.0     0     0     0     0    0   0
da1            0      98      0.0   2850.9     0     1     0     1    1   7
da1            0       4      0.0     16.0     0     1   139    47    0  28
da1            0       0      0.0      0.0     0     0     0     0    0   0
da1            0       0      0.0      0.0     0     0     0     0    0   0
da1            0       0      0.0      0.0     0     0     0     0    0   0
da1            0      80      0.0   2361.3     0     1     0     1    1   6
da1            0       4      0.0     15.3     0     1   125    42    0  24
Every 5th row is a spike in writes on my RPI4. "zpool iostat 1" should give a similar pattern.
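
If you want to verify the batching interval on your machine (5 seconds should be the default, if I remember correctly):

$ sysctl vfs.zfs.txg.timeout
vfs.zfs.txg.timeout: 5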

So if my observation is right, it might be interesting to find out what is doing the writing (a couple of commands that may help are below). I had similar issues after the number of jails on my RPI4 grew: each one did a little bit of writing, which accumulated into quite a lot. My solution was to add an SSD.
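
A quick way to see which processes are responsible is top in I/O mode, sorted on total I/O (check top(1) for the exact flags on your version):

$ top -m io -o total

That shows per-process read and write counts since the last refresh. And "gstat -p" limits gstat to the physical providers, in case the partition and .eli rows add too much noise.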

Regards,
Ronald.
 


