From: Ronald Klop <ronald-lists@klop.ws>
To: void <void@f-m.fm>
Cc: freebsd-stable@freebsd.org
Date: Mon, 30 Oct 2023 15:15:11 +0100 (CET)
Subject: Re: periodic daily takes a very long time to run (14-stable)
Message-ID: <1189591588.6325.1698675311830@localhost>

From: void <void@f-m.fm>
Date: Friday, 27 October 2023 18:38
To: freebsd-stable@freebsd.org
Subject: Re: periodic daily takes a very long time to run (14-stable)

On Fri, Oct 27, 2023 at 05:23:49PM +0200, Ronald Klop wrote:

>Well. You could remove daily_clean_disks_enable="YES" from /etc/periodic.conf.
>That saves you the "find". I have never used it before. The default is "off".

Yes, I'll try that, but it's a very recent addition. The periodic daily problem
is something that's been happening since even before the machine went from
13-stable to 14-stable. The addition of daily_clean_disks has made a bad
problem worse: rather than lasting ~1 hour, it now lasts 3.
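
For reference, disabling it is a one-line override in /etc/periodic.conf (a
sketch; per /etc/defaults/periodic.conf the setting already defaults to "NO",
so simply deleting the "YES" line has the same effect):

```shell
# /etc/periodic.conf -- explicitly disable the daily "clean disks" sweep,
# which runs find(1) over every mounted filesystem
daily_clean_disks_enable="NO"
```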

periodic daily should be finished soon; it's moved on to /usr/local/etc/periodic/security/460.pkg-checksum

edit: it finished, took exactly 3 hrs:
# date && periodic daily && date
Fri Oct 27 13:40:15 BST 2023
Fri Oct 27 16:40:14 BST 2023

>The list of files it checks for doesn't look very useful to me in 2023.
>This does do a full find over *all* directories and files. *every day* ???

Yeah, that was sort of my reaction. I've not looked yet for a monthly
clean_disks where I could define an exclude pattern for things like ccache.
That, to me, would be useful.
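
There's no monthly variant out of the box, but the pattern list the daily
script matches is itself configurable. A hedged sketch (check
/etc/defaults/periodic.conf for the stock list, which also matches a.out,
emacs backup files, etc.):

```shell
# /etc/periodic.conf -- narrow the daily sweep to core files only (sketch)
daily_clean_disks_files="*.core"
daily_clean_disks_days=7    # only remove matches untouched for a week
```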

The reason it was enabled was no more than it being a "sounds like a good
idea" variable on a machine that is presently used mainly for poudriere,
which tends to generate a lot of core files as a consequence of pkg build
failures etc.

>If you have a lot of *.core files you are better off putting this in
>sysctl.conf: kern.corefile=/var/tmp/%U.%N.%I.%P.core .
>So you know where to look to delete them.
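
Spelled out as a config fragment (the format characters are documented in
core(5)):

```shell
# /etc/sysctl.conf -- collect every core dump in one predictable place
# %U = user ID, %N = process name, %I = dump index, %P = process ID (core(5))
kern.corefile=/var/tmp/%U.%N.%I.%P.core
```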

>Actually my RPI3 has this in cron: @daily  find /var/tmp/ -name "*.core" -mtime +7 -ls -delete .

thanks for these, have implemented both

>That is pretty heavy for your setup if you manage to run things in parallel,
>as the RPI4 has 4 CPUs.
PARALLEL_JOBS=1
TMPFS=ALL # with excludes for things like llvm & rust & gcc
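
The tmpfs excludes mentioned above would look something like this in
poudriere.conf (a sketch; TMPFS_BLACKLIST is the knob in recent poudriere
releases, and the port glob patterns here are just examples):

```shell
# /usr/local/etc/poudriere.conf -- sketch, assuming a recent poudriere
TMPFS=all
TMPFS_BLACKLIST="llvm* rust gcc*"   # build these large ports on disk instead
TMPFS_BLACKLIST_TMPDIR=${BASEFS}/data/cache/tmp  # scratch dir for excluded builds
```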

>It doesn't help to run daily_cleanup together. ;-)

Before I started asking about the issue, what I was trying to address/work
around was that sometimes (if, for example, a poudriere run went over 24 hrs)
it would run into periodic daily. That already caused problems before I made
it worse by adding daily cleanup, LOL.

Now to test periodic daily without daily cleanup...

How long does yours take?
-- 

Mine takes:

[root@rpi4 ~]# date && periodic daily && date
Mon Oct 30 14:35:53 CET 2023
Mon Oct 30 14:54:18 CET 2023

=========================================
[root@rpi4 ~]# cat /etc/periodic.conf
daily_output="/var/log/daily.log"
daily_status_security_output="/var/log/daily.log"

weekly_output="/var/log/weekly.log"
weekly_status_security_output="/var/log/weekly.log"

monthly_output="/var/log/monthly.log"
monthly_status_security_output="/var/log/monthly.log"

# 223.backup-zfs
daily_backup_zfs_enable="YES"                           # Backup output from zpool/zfs list
daily_backup_zfs_props_enable="YES"                     # Backup zpool/zfs filesystem properties
daily_backup_zfs_verbose="YES"                          # Report diff between the old and new backups.
# 404.status-zfs
daily_status_zfs_enable="YES"                           # Check ZFS
# 800.scrub-zfs
daily_scrub_zfs_enable="YES"

=========================================

So comparable to your "fixed" daily.

I can probably gain some speed by setting exec/setuid off on some ZFS datasets too.

Regards,
Ronald.