Date: Sun, 27 Nov 2005 01:27:38 -0800
From: Mike Eubanks <mse_software@charter.net>
To: freebsd-stable@freebsd.org
Subject: Re: NFS network load on 5.4-STABLE
Message-ID: <1133083658.838.109.camel@yak.mseubanks.net>
In-Reply-To: <43891EA5.2020206@mac.com>
References: <1132964757.831.20.camel@yak.mseubanks.net> <43891EA5.2020206@mac.com>
On Sat, 2005-11-26 at 21:49 -0500, Chuck Swiger wrote:
> Mike Eubanks wrote:
> > As soon as I mount my NFS file systems, the network load increases to a
> > constant 80%-90% of network bandwidth, even when the file systems are
> > not in use.  NFS stats on the client machine (nfsstat -c) produce the
> > following:
> [ ... ]
> > Fsstat and Requests are increasing very rapidly.  Both the client and
> > server are i386 5.4-STABLE machines.  Is this behaviour normal?
>
> Sort of.  Some fancy parts of X like file-manager/explorer applications
> tend to call fstat() a lot, but it's probably tunable, and if you enable
> NFS attribute caching that will help a lot.

Thank you for the reply, Chuck.  It seems to have something to do with
GNOME.  I haven't upgraded to 2.12 yet, but the change did appear after
I refreshed my user configuration to remove any stale config files.
Using "top -mio" I get the following:

  VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
    38     56      0      0      0      0   0.00% libgtop_server
    94     16      0      0      0      0   0.00% Xorg
     4      0      0      0      0      0   0.00% top
     0      0      0      0      0      0   0.00% mozilla-bin
   115     40      0      0      0      0   0.00% multiload-appl
    42      1      0      0      0      0   0.00% anjuta-bin
     0      0      0      0      0      0   0.00% evolution-2.2
   130      9      0      0      0      0   0.00% gnome-terminal
    15     10      0      0      0      0   0.00% clock-applet
    42      0      0      0      0      0   0.00% mixer_applet2
    10      0      0      0      0      0   0.00% metacity
     3      0      0      0      0      0   0.00% nautilus
     4      0      0      0      0      0   0.00% wnck-applet

When I unmount the NFS share, the involuntary context switches drop to
nearly 0 and the voluntary context switches drop significantly.  Other
than that, everything stayed at 0.

I have dumped the traffic on the network adapter in question.  With
abbreviated host names, there are miles of the following.

       +---- file-manager/explorer?
       |
client.220312819 > server.nfs: 96 fsstat [|nfs]
server.nfs > client.220312819: reply ok 168 fsstat POST: DIR 755 ids 1001/0 [|nfs]
client.220312820 > server.nfs: 96 fsstat [|nfs]
server.nfs > client.220312820: reply ok 168 fsstat POST: DIR 755 ids 1001/0 [|nfs]
client.220312821 > server.nfs: 96 fsstat [|nfs]
server.nfs > client.220312821: reply ok 168 fsstat POST: DIR 755 ids 0/0 [|nfs]
client.220312822 > server.nfs: 96 fsstat [|nfs]
server.nfs > client.220312822: reply ok 168 fsstat POST: DIR 755 ids 0/0 [|nfs]
client.220312823 > server.nfs: 96 fsstat [|nfs]
server.nfs > client.220312823: reply ok 168 fsstat POST: DIR 755 ids 0/0 [|nfs]

If this is enough evidence against the file-manager/explorer, I'll just
have to accept it for now.  I can't find anything about tuning them.

As for attribute caching, do you mean the `-o ac*' options to
mount_nfs?  I also noticed two sysctl values, although I left them
unmodified:

vfs.nfs.access_cache_timeout: 2
vfs.nfs4.access_cache_timeout: 60

> "ls /afs", if available, is a wonderful test of
> whether a program/file-manager is being polite.

I had better read a book on this first if you're talking about the
Andrew File System.  Any suggestions?

> Anyway, "top -mio" is likely to be informative.

--
Mike Eubanks <mse_software@charter.net>
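[For reference, the kind of capture and tuning discussed above might be
sketched as follows.  This is a hypothetical example: the interface
name, server name, mount point, and timeout values are placeholders,
not details taken from this thread.]

```shell
# Capture NFS traffic on the client to look for an fsstat storm
# (fxp0 is a placeholder interface name):
tcpdump -i fxp0 -s 0 port nfs

# Mount with the attribute-cache timeouts raised; see mount_nfs(8)
# for the acregmin/acregmax/acdirmin/acdirmax options:
mount_nfs -o acregmin=3,acregmax=60,acdirmin=30,acdirmax=60 \
    server:/export /mnt/nfs

# Alternatively, raise the access-cache sysctl noted above (seconds):
sysctl vfs.nfs.access_cache_timeout=60
```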