Date:      Mon, 30 Oct 2023 15:15:11 +0100 (CET)
From:      Ronald Klop <ronald-lists@klop.ws>
To:        void <void@f-m.fm>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: periodic daily takes a very long time to run (14-stable)
Message-ID:  <1189591588.6325.1698675311830@localhost>
In-Reply-To: <ZTvnikI1ivLDtCYP@int21h>
References:  <ZTuNvVMW_XG3mZKU@int21h> <1122335317.4913.1698407124469@localhost> <ZTuyXPjddEPqh-bi@int21h> <794932758.6659.1698413675475@localhost> <ZTvMODY-mcBImHZP@int21h> <1210534753.8409.1698420229888@localhost> <ZTvnikI1ivLDtCYP@int21h>


From: void <void@f-m.fm>
Date: Friday, 27 October 2023 18:38
To: freebsd-stable@freebsd.org
Subject: Re: periodic daily takes a very long time to run (14-stable)
> 
> On Fri, Oct 27, 2023 at 05:23:49PM +0200, Ronald Klop wrote:
> 
> >Well. You could remove daily_clean_disks_enable="YES" from /etc/periodic.conf.
> >That saves you the "find". I have never used it before. The default is "off".
> 
> Yes, I'll try that, but it's a very recent addition. The periodic daily problem
> is something that's been happening since even before the machine went from
> 13-stable to 14-stable. The addition of daily_clean_disks has made a bad
> problem worse: rather than the problem happening for ~1 hour, it now happens for 3.
> 
> periodic daily should be finished soon; it's moved on to /usr/local/etc/periodic/security/460.pkg-checksum
> 
> edit: it finished, took exactly 3 hrs:
> # date && periodic daily && date
> Fri Oct 27 13:40:15 BST 2023
> Fri Oct 27 16:40:14 BST 2023
> 
> >The list of files it checks for doesn't look very useful to me in 2023.
> >This does do a full find over *all* directories and files. *every day* ???
> 
> Yeah, that was sort of my reaction. I've not yet looked for a monthly
> clean_disks where I could define an exclude pattern for things like
> ccache. That, to me, would be useful.
> 
> The reason it was enabled was no more than it "sounded like a good idea"
> on a machine that is presently used mainly for poudriere, which tends to
> generate a lot of core files as a consequence of pkg build failures etc.
> 
> >If you have a lot of *.core files you are better off putting this in
> >sysctl.conf: kern.corefile=/var/tmp/%U.%N.%I.%P.core .
> >So you know where to look to delete them.
> 
> >Actually my RPI3 has this in cron: @daily  find /var/tmp/ -name "*.core" -mtime +7 -ls -delete .
> 
> thanks for these, have implemented both
> 
> >That is pretty heavy for your setup if you manage to run things in parallel,
> >as the RPI4 has 4 CPUs.
> PARALLEL_JOBS=1
> TMPFS=ALL # with excludes for things like llvm & rust & gcc
> 
> >It doesn't help to run daily_cleanup together. ;-)
> 
> Before I started asking about the issue, what I was trying to address/work
> around was that sometimes (if, for example, the poudriere run went over 24 hrs)
> it would run into periodic daily. This caused problems even before I made it
> worse by adding daily cleanup, LOL.
> 
> Now to test periodic daily without daily cleanup...
> 
> How long does yours take?
> -- 
>  
> 
> 
> 

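(Side note: if the llvm/rust/gcc excludes are done with poudriere's
TMPFS_BLACKLIST, that looks roughly like this in poudriere.conf; the
globs below are only a guess at your list:

TMPFS_BLACKLIST="llvm* rust* gcc*"              # PKGNAME globs that never use tmpfs
TMPFS_BLACKLIST_TMPDIR=${BASEFS}/data/cache/tmp # their WRKDIRs are built here instead

so the big toolchain builds stay off tmpfs.)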

Mine takes about 18 minutes:

[root@rpi4 ~]# date && periodic daily && date
Mon Oct 30 14:35:53 CET 2023
Mon Oct 30 14:54:18 CET 2023

=========================================
[root@rpi4 ~]# cat /etc/periodic.conf
daily_output="/var/log/daily.log"
daily_status_security_output="/var/log/daily.log"

weekly_output="/var/log/weekly.log"
weekly_status_security_output="/var/log/weekly.log"

monthly_output="/var/log/monthly.log"
monthly_status_security_output="/var/log/monthly.log"

# 223.backup-zfs
daily_backup_zfs_enable="YES"                           # Backup output from zpool/zfs list
daily_backup_zfs_props_enable="YES"                     # Backup zpool/zfs filesystem properties
daily_backup_zfs_verbose="YES"                          # Report diff between the old and new backups.
# 404.status-zfs
daily_status_zfs_enable="YES"                           # Check ZFS
# 800.scrub-zfs
daily_scrub_zfs_enable="YES"

=========================================

So comparable to your "fixed" daily.

I can probably gain some speed by setting exec/setuid off on some ZFS volumes too.
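For example (the dataset name below is only an illustration, not my actual layout):

[root@rpi4 ~]# zfs set setuid=off exec=off zroot/poudriere/data/logs
[root@rpi4 ~]# zfs get -r setuid,exec zroot/poudriere

IIRC the daily setuid check skips filesystems mounted nosuid, so it no longer
has to walk those datasets.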

Regards,
Ronald.
 