Date: Wed, 29 Dec 1999 18:53:24 -0800 (PST)
From: Tom <tom@uniserve.com>
To: freebsd-stable@freebsd.org, freebsd-hackers@freebsd.org
Subject: softupdates and debug.max_softdeps
Message-ID: <Pine.BSF.4.02A.9912291838230.25423-100000@shell.uniserve.ca>
I'm trying to find some information on reasonable settings for debug.max_softdeps on a recent FreeBSD-stable system.

It seems that if you have a machine that can generate disk I/O much faster than it can be handled, has a large amount of RAM (and therefore a large debug.max_softdeps), and a very large filesystem (about 80 GB), filesystem metadata updates can get _very_ far behind. For instance, on a test system running 4 instances of postmark continuously for 24 hours, "df" reports that 40 GB of disk space is in use, even though only about 5 GB is actually used. If I kill the postmark processes, the metadata is eventually dribbled out and "df" reports 5 GB in use. It takes about 20 minutes for the metadata to be updated on a completely idle system.

On this particular system, it doesn't seem to stabilize either. If the 4 postmark instances are allowed to keep running, disk usage climbs indefinitely (at 40 GB it was still climbing) until the machine eventually reboots silently.

debug.max_softdeps is by default set to 523,712 on this machine (1 GB of RAM). Is that a reasonable value? I see some tests in the docs with max_softdeps set to 4000 or so.

Tom
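(For reference, a minimal sketch, not part of the original mail: the value can be read or changed from C with sysctlbyname(3), the same interface the sysctl(8) utility uses. The 4000 passed on the command line below is purely illustrative, echoing the figure from the docs mentioned above.)

/*
 * Sketch: query debug.max_softdeps, and optionally set a new value.
 * Setting requires root.  Usage: ./maxsoftdeps [new_value]
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char *argv[])
{
	int cur, want;
	size_t len = sizeof(cur);

	/* Read the current limit on outstanding softupdates dependencies. */
	if (sysctlbyname("debug.max_softdeps", &cur, &len, NULL, 0) == -1) {
		perror("sysctlbyname(debug.max_softdeps)");
		return (1);
	}
	printf("debug.max_softdeps = %d\n", cur);

	/* Optionally write a new limit, e.g. ./maxsoftdeps 4000 */
	if (argc > 1) {
		want = atoi(argv[1]);
		if (sysctlbyname("debug.max_softdeps", NULL, NULL,
		    &want, sizeof(want)) == -1) {
			perror("set debug.max_softdeps");
			return (1);
		}
		printf("debug.max_softdeps set to %d\n", want);
	}
	return (0);
}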