Date: Wed, 6 Apr 2022 23:14:51 +0700
From: Eugene Grosbein <eugen@grosbein.net>
To: egoitz@ramattack.net, freebsd-fs@freebsd.org, freebsd-hackers@freebsd.org, Freebsd performance <freebsd-performance@freebsd.org>
Subject: Re: Desperate with 870 QVO and ZFS
Message-ID: <ca3f86f2-94a1-be94-ad55-7bd1c9bc50ab@grosbein.net>
In-Reply-To: <28e11d7ec0ac5dbea45f9f271fc28f06@ramattack.net>
References: <4e98275152e23141eae40dbe7ba5571f@ramattack.net> <665236B1-8F61-4B0E-BD9B-7B501B8BD617@ultra-secure.de> <0ef282aee34b441f1991334e2edbcaec@ramattack.net> <28e11d7ec0ac5dbea45f9f271fc28f06@ramattack.net>
06.04.2022 22:30, egoitz@ramattack.net writes:
> One perhaps important note!!
>
> When this happens... almost all processes appear in top in the following state:
>
> txg state or
> txg->
> bio....
>
> Perhaps the vfs.zfs.dirty_data_max, vfs.zfs.txg.timeout, or vfs.zfs.vdev.async_write_active_max_dirty_percent should be increased or decreased... I'm afraid of making some change and finally ending up with an unstable server... I'm not an expert in handling these values...
>
> Any recommendation?

1) Make sure the pool has enough free space, because ZFS can become crawling slow otherwise.

2) Increase recordsize up to 1MB for file systems located in the pool, so ZFS is allowed to use bigger request sizes for read/write operations.

3) If you use compression, check whether the achieved compressratio is worth it; if not (e.g. below 1.4), it is better to disable compression to avoid its overhead.

4) Try "zfs set redundant_metadata=most" to decrease the amount of small writes to the file systems.

5) If you have a good power supply and a stable (non-crashing) OS, try increasing the sysctl vfs.zfs.txg.timeout from its default of 5 seconds, but do not be extreme (e.g. go up to 10 seconds at most). This may increase the number of long writes and decrease the number of short writes, which is good.
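For reference, the suggestions above roughly correspond to the following commands. This is a sketch only: the pool name "tank" is a placeholder, and each setting should be checked against your own hardware and workload before applying (these are administrative commands, not something to paste blindly):

```shell
# 1) Check free space -- ZFS degrades badly as the pool approaches full.
zpool list -o name,size,alloc,free,cap tank

# 2) Allow bigger request sizes. Note: recordsize only affects
#    files written after the change, not existing data.
zfs set recordsize=1M tank

# 3) See whether compression actually pays off; if the ratio is low
#    (e.g. below 1.4x), disabling it avoids the CPU overhead.
zfs get compressratio tank
# zfs set compression=off tank    # only if the ratio is not worth it

# 4) Reduce the amount of small metadata writes.
zfs set redundant_metadata=most tank

# 5) Batch dirty data into fewer, longer transaction groups.
#    Only do this with reliable power and a stable OS, since more
#    unwritten data is held in RAM between txg commits.
sysctl vfs.zfs.txg.timeout=10
```

To make the txg.timeout change persistent across reboots, it can also be set in /etc/sysctl.conf.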