Date:      Sun, 7 Aug 2011 18:47:46 -0400 (EDT)
From:      Rick Macklem <rmacklem@uoguelph.ca>
To:        FreeBSD FS <freebsd-fs@freebsd.org>
Cc:        onwahe@gmail.com
Subject:   NFS calculation of max commit size
Message-ID:  <1687823014.1491995.1312757266327.JavaMail.root@erie.cs.uoguelph.ca>

A recent PR (kern/159351) noted that the following
calculation results in a divide-by-zero when
desiredvnodes < 1000.

	nmp->nm_wcommitsize = hibufspace / (desiredvnodes / 1000);
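
For example, with desiredvnodes set to, say, 999 (just an
illustrative value below the cutoff), the integer division
999 / 1000 truncates to 0 and the above becomes a divide
by zero.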

Just fixing the divide-by-zero is easy enough, but I'm not
sure what this calculation is trying to do. Making it a fraction
of "hibufspace" makes sense (nm_wcommitsize is the maximum # of
bytes of uncommitted data in the NFS client's buffer cache blocks,
if I understand it correctly), but why divide it by

                (desiredvnodes / 1000) ??

Maybe the thinking was that fewer vnodes means the buffer space
is shared with fewer other file systems, or something like that?

Anyhow, it seems to me that the formula is bogus for small
values of desiredvnodes (for example, desiredvnodes == 1500
gives 1500 / 1000 == 1, so nm_wcommitsize == hibufspace,
which sounds too large to me).

I'm thinking that putting an upper limit of 10% of hibufspace
on it might make sense, i.e. change the above to:

	if (desiredvnodes >= 11000)
		nmp->nm_wcommitsize = hibufspace / (desiredvnodes / 1000);
	else
		nmp->nm_wcommitsize = hibufspace / 10;
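
If I've done the integer arithmetic right, that gives:

	desiredvnodes == 500   -> hibufspace / 10  (capped, no divide-by-zero)
	desiredvnodes == 10999 -> hibufspace / 10  (capped)
	desiredvnodes == 11000 -> hibufspace / 11  (just under the 10% cap)

so the value stays at or below 10% of hibufspace and still scales
down as desiredvnodes grows, the same as before.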

Anyone have comments or insight into this calculation?

rick
ps: jhb, I hope you don't mind. I emailed you first and then
    thought others might have some ideas, too.