Date: Sun, 8 Feb 2009 07:50:03 GMT
From: Martin Birgmeier <martin@email.aon.at>
To: freebsd-fs@FreeBSD.org
Subject: Re: kern/131360: [nfs] poor scaling behavior of the NFS server under load
Message-ID: <200902080750.n187o3kl026625@freefall.freebsd.org>
The following reply was made to PR kern/131360; it has been noted by GNATS.

From: Martin Birgmeier <martin@email.aon.at>
To: bug-followup@FreeBSD.org
Cc:
Subject: Re: kern/131360: [nfs] poor scaling behavior of the NFS server under load
Date: Sun, 8 Feb 2009 08:40:31 +0100 (CET)

Yet more info... here is output from top.

Also, the following just happened:

- I am editing this mail on the NFS server.
- Together with the top output from below, I was pasting a total of 1000 lines (my XTerm scroll size).
- This caused the load on this server to effectively double again (over the pasted values shown below).

Basically, I can only continue editing this mail if I suspend the build on the client machine, in which case the server immediately becomes responsive again. So maybe it is not a pppoa interaction with NFS serving; rather, any other load on the server combined with NFS serving makes the load go to insane values. Or maybe it is just additional TCP load, because I am displaying this XTerm on the NFS client (where the X server is running), and all the pasting has to go via the X server's TCP connection.

Also, I have the impression that as long as only one of the 8 nfsd's on the server is busy, things are mostly normal, but as soon as more than one starts doing work (as seen in the output below), the load on the server goes way up.

And regarding "mostly normal": even if only one nfsd seems to be active, the load on the server is already close to one. Assuming that an nfsd does not do much more than network and disk I/O, this really should not be the case (and was not under 6.3, where the load was low even under quite heavy NFS I/O). So maybe it is a ULE problem, after all?
last pid:  2527;  load averages: 14.71, 10.36,  6.13   up 0+01:04:43  08:21:08
111 processes: 9 running, 102 sleeping
CPU:  1.4% user,  0.0% nice, 90.5% system,  8.1% interrupt,  0.0% idle
Mem: 135M Active, 745M Inact, 119M Wired, 1012K Cache, 112M Buf, 248M Free
Swap: 2048M Total, 2048M Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
  971 root        1   4    0  3128K   944K -       13:45 40.28% nfsd
  972 root        1   4    0  3128K   944K -        2:19 15.09% nfsd
  973 root        1   4    0  3128K   944K -        1:31 10.16% nfsd
  974 root        1   4    0  3128K   944K -        1:03  6.05% nfsd
  975 root        1   4    0  3128K   944K -        0:49  4.59% nfsd
  977 root        1   4    0  3128K   944K -        0:41  3.56% nfsd
  978 root        1   4    0  3128K   944K -        0:35  2.64% nfsd
  976 root        1   4    0  3128K   944K -        0:31  1.81% nfsd
 2527 root        1  96    0  3164K   992K RUN      0:00  1.54% rsh
 1471 root        1  81  -15  5032K  2716K select   0:05  0.05% ppp
  919 root        1  96    0  3128K  3148K select   2:16  0.00% amd
 1539 root        1  96    0  6508K  4964K RUN      0:10  0.00% xterm
 1140 squid       1   4    0 12000K 10152K sbwait   0:05  0.00% perl5.8.9
 1130 squid       1  96    0 15660K 10820K RUN      0:05  0.00% squid
 1141 squid       1   4    0 12000K 10148K sbwait   0:04  0.00% perl5.8.9
 1142 squid       1   4    0 12000K 10148K sbwait   0:04  0.00% perl5.8.9
 1143 squid       1   4    0 12000K 10104K sbwait   0:03  0.00% perl5.8.9
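For anyone wanting to quantify the aggregate nfsd CPU usage from a snapshot like the one above, here is a small shell/awk sketch. It is only an illustration and assumes the column layout shown (COMMAND as the last field, the WCPU percentage as the field just before it); the function name `sum_nfsd_wcpu` is mine, not anything standard.

```shell
# sum_nfsd_wcpu: read a top(1) snapshot on stdin and print the summed
# WCPU of all nfsd processes. Assumes the layout shown above: COMMAND
# is the last field, WCPU (e.g. "40.28%") is the field just before it.
sum_nfsd_wcpu() {
    awk '
        $NF == "nfsd" {
            wcpu = $(NF - 1)      # e.g. "40.28%"
            sub(/%$/, "", wcpu)   # strip the trailing percent sign
            total += wcpu
        }
        END { printf "%.2f\n", total }
    '
}

# Example with the first two nfsd lines from the snapshot above:
printf '%s\n' \
    '  971 root 1  4 0 3128K 944K - 13:45 40.28% nfsd' \
    '  972 root 1  4 0 3128K 944K -  2:19 15.09% nfsd' \
    | sum_nfsd_wcpu
# -> 55.37
```

On the server itself this could presumably be fed from top's batch mode, e.g. `top -b | sum_nfsd_wcpu`, to watch whether the total climbs when more than one nfsd becomes active.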