Date: Sun, 07 Feb 2021 16:50:49 +0100
From: Walter von Entferndt <walter.von.entferndt@posteo.net>
To: freebsd-performance@freebsd.org
Subject: Re: Tuning and monitoring write intensive server
Message-ID: <2002412.uJW0cDvVUg@t450s.local.lan>
In-Reply-To: <mailman.51.1612699204.90685.freebsd-performance@freebsd.org>
References: <mailman.51.1612699204.90685.freebsd-performance@freebsd.org>
At Sunday, 7 February 2021, 13:00:04 CET, Vladilen Kozin <vladilen.kozin@gmail.com> wrote:
> [1 dedicated disk/ufs per thread, no redundancy, ...]

RTFM tuning(7), zpool(8), zfs(8), gjournal(8), gstripe(8), gsched(8).

- Obviously striping the disks would be beneficial, but it seems you don't
  want that (not enough disks?) and know what you're doing.  I suppose your
  special task is intentionally tolerant of data loss (no redundancy).
- Putting the intent log on a dedicated, fast medium (SSD or NVMe) would gain
  performance.  Either ZFS can do that for you, or you can use gjournal(8).
- Inserting an I/O scheduler might improve performance, too (gsched(8)).

Yes, UFS is likely faster than ZFS on such a setup, but ZFS offers many
advantages in terms of administration, fault tolerance & reliability.

You can fetch my scripts, the rc(8) script that inserts the scheduler and
fs_summarize.awk to estimate the parameters for newfs(8), from the forums,
in the thread "Useful scripts".  I.e. run the AWK script on some samples of
your working data, then adjust the appropriate knobs to newfs(8).  Note that
ZFS automagically adjusts to the I/O chunk size.

To monitor the I/O, use systat(1).  Additionally, you can find a plethora of
ports(7) for this; use psearch(1) or portfind(1) (install them first).

A few rough example invocations for the above follow below.
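To illustrate the striping option, a minimal sketch with gstripe(8); the disk
names da0/da1, the label st0, the 128 kB stripe size and the mount point are
placeholders for whatever matches your hardware and I/O pattern:

  # build a stripe across two disks (128 kB stripe size), then put UFS on it
  gstripe label -v -s 131072 st0 /dev/da0 /dev/da1
  newfs -U /dev/stripe/st0
  mount /dev/stripe/st0 /data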
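For the dedicated intent log resp. journal device, two sketches: ZFS with a
separate log vdev, and gjournal(8) with the journal on a second provider.
The pool name tank and the devices da0 (data) and nvd0 (fast NVMe) are
assumptions:

  # ZFS: data on da0, intent log (ZIL) on the fast device
  zpool create tank /dev/da0 log /dev/nvd0

  # UFS alternative: gjournal, journal kept on nvd0
  gjournal load
  gjournal label /dev/da0 /dev/nvd0
  newfs -J /dev/da0.journal
  mount -o async /dev/da0.journal /data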
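For the I/O scheduler, a sketch with gsched(8); the rr (round-robin)
algorithm, the module names as shipped in 12.x and the provider da0 are just
examples, and the rc(8) script mentioned above automates this step:

  # load the GEOM scheduler class plus the rr algorithm module,
  # then insert a scheduler above da0
  kldload geom_sched gsched_rr
  gsched insert -a rr da0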
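On adjusting newfs(8) to the chunk size that fs_summarize.awk reports: a
sketch, assuming the script showed mostly 32 kB writes; the partition name
is a placeholder:

  # 32 kB blocks, 4 kB fragments, soft updates enabled
  newfs -U -b 32768 -f 4096 /dev/da0p1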
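And for monitoring, the base system already covers the basics before you
reach for the ports; for instance (da0 again being a placeholder):

  # live per-device I/O statistics, refreshed every second
  systat -iostat 1

  # extended iostat output for one disk, also once a second
  iostat -x da0 1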
--
=|o) "Imagine it works, and nobody manages to pull it off." (Wolfgang Neuss)

Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?2002412.uJW0cDvVUg>