Date: Fri, 27 Oct 2023 14:46:39 +0200 (CEST)
From: Ronald Klop <ronald-lists@klop.ws>
To: void <void@f-m.fm>
Cc: freebsd-stable@freebsd.org
Subject: Re: periodic daily takes a very long time to run (14-stable)
Message-ID: <2146377145.5323.1698410799235@localhost>
In-Reply-To: <ZTutTkjadP3da0wa@int21h>
References: <ZTuNvVMW_XG3mZKU@int21h> <1122335317.4913.1698407124469@localhost> <ZTutTkjadP3da0wa@int21h>
From: void <void@f-m.fm>
Date: Friday, 27 October 2023 14:30
To: freebsd-stable@freebsd.org
Subject: Re: periodic daily takes a very long time to run (14-stable)

> Hi,
>
> On Fri, Oct 27, 2023 at 01:45:24PM +0200, Ronald Klop wrote:
>
> >Can you run "gstat" or "iostat -x -d 1" to see how busy your disk is?
> >And how much bandwidth it uses.
> >
> >The output of "zpool status", "zpool list" and "zfs list" can also
> >be interesting.
> >
> >ZFS is known to become slow when the zpool is almost full.
>
> OK. It's just finished the periodic daily I wrote about initially:
>
> # date && periodic daily && date
> Fri Oct 27 10:12:23 BST 2023
> Fri Oct 27 13:12:09 BST 2023
>
> so almost exactly 3 hrs.
>
> Regarding gstat/iostat - do you mean when periodic is running, not running,
> or both?
>
> Regarding space used:
>
> NAME        AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> zroot        790G  93.6G        0B     96K             0B      93.6G
> zroot/ROOT   790G  64.1G        0B     96K             0B      64.1G
>
> zpool status -v
>
> # zpool status -v
>   pool: zroot
>  state: ONLINE
>   scan: scrub repaired 0B in 03:50:52 with 0 errors on Sat Oct 21 20:53:27 2023
> config:
>
>         NAME         STATE     READ WRITE CKSUM
>         zroot        ONLINE       0     0     0
>           da0p3.eli  ONLINE       0     0     0
>
> errors: No known data errors
>
> # zpool list
> NAME   SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
> zroot  912G  93.6G  818G        -         -   21%  10%  1.00x  ONLINE  -
>
> # zfs list
> NAME                                           USED  AVAIL  REFER  MOUNTPOINT
> zroot                                         93.6G   790G    96K  /zroot
> zroot/ROOT                                    64.1G   790G    96K  none
> zroot/ROOT/13.1-RELEASE-p4_2022-12-01_063800     8K   790G  11.3G  /
> zroot/ROOT/13.1-RELEASE-p5_2023-02-03_232552     8K   790G  27.4G  /
> zroot/ROOT/13.1-RELEASE-p5_2023-02-09_153529     8K   790G  27.9G  /
> zroot/ROOT/13.1-RELEASE-p6_2023-02-18_024922     8K   790G  33.4G  /
> zroot/ROOT/13.1-RELEASE_2022-11-17_165717        8K   790G   791M  /
> zroot/ROOT/default                            64.1G   790G  14.4G  /
> zroot/distfiles                               2.96G   790G  2.96G  /usr/ports/distfiles
> zroot/postgres                                  96K   790G    96K  /var/db/postgres
> zroot/poudriere                               4.17G   790G   104K  /zroot/poudriere
> zroot/poudriere/jails                         3.30G   790G    96K  /zroot/poudriere/jails
> zroot/poudriere/jails/140R-rpi2b              1.03G   790G  1.03G  /usr/local/poudriere/jails/140R-rpi2b
> zroot/poudriere/jails/localhost               1.13G   790G  1.13G  /usr/local/poudriere/jails/localhost
> zroot/poudriere/jails/testvm                  1.14G   790G  1.13G  /usr/local/poudriere/jails/testvm
> zroot/poudriere/ports                          891M   790G    96K  /zroot/poudriere/ports
> zroot/poudriere/ports/testing                  891M   790G   891M  /usr/local/poudriere/ports/testing
> zroot/usr                                     22.1G   790G    96K  /usr
> zroot/usr/home                                13.5G   790G  13.5G  /usr/home
> zroot/usr/home/tmp                             144K   790G   144K  /usr/home/void/tmp
> zroot/usr/obj                                 3.83G   790G  3.83G  /usr/obj
> zroot/usr/ports                               2.30G   790G  2.30G  /usr/ports
> zroot/usr/src                                 2.41G   790G  2.41G  /usr/src
> zroot/var                                     28.9M   790G    96K  /var
> zroot/var/audit                                 96K   790G    96K  /var/audit
> zroot/var/crash                                 96K   790G    96K  /var/crash
> zroot/var/log                                 27.8M   790G  27.8M  /var/log
> zroot/var/mail                                 688K   790G   688K  /var/mail
> zroot/var/tmp                                  112K   790G   112K  /var/tmp
>
> thank you for looking at my query.
>
> --

Mmm. Your pool has a lot of space left, so that is good.

About gstat/iostat: yes, during the daily scan would be nice. The numbers outside of the daily scan can also help as a reference.
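Something like the sketch below is what I have in mind (untested, and the log file name is only an example). iostat keeps printing a line per device every second until you stop it, so the background job plus the two date stamps gives you a timeline to line up against the slow parts of the run:

  # collect disk stats in the background while the daily job runs
  iostat -x -d 1 > /var/tmp/iostat-daily.log 2>&1 &
  IOSTAT_PID=$!
  date && periodic daily && date
  # stop the collector afterwards
  kill $IOSTAT_PID

gstat is interactive by default, so I would just watch that in a second terminal while the above is running.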
NB: there has been some talk on the mailing list about vnode re-use problems, but I think that was under much higher filesystem load, e.g. 20 find processes running in parallel over millions of files. Like this:

https://cgit.freebsd.org/src/commit/?id=054f45e026d898bdc8f974d33dd748937dee1d6b
https://cgit.freebsd.org/src/log/?qt=grep&q=vnode&showmsg=1

These improvements also ended up in 14.
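If you want to see whether vnode recycling plays any role on your machine, a quick look at the counters before and during the daily run costs nothing. This is only a rough sketch; the exact set of sysctl names differs a bit between versions:

  # vnodes in use versus the configured limit
  sysctl kern.maxvnodes vfs.numvnodes vfs.freevnodes
  # list whatever vnode/recycle counters your kernel exposes
  sysctl vfs | grep -i -e vnode -e recycl

If vfs.numvnodes sits close to kern.maxvnodes while the daily find(1) jobs are running, vnode recycling is probably part of what you are waiting on.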
Regards,
Ronald.