Date: Wed, 28 Aug 2013 21:13:50 -0400
From: "Sam Fourman Jr." <sfourman@gmail.com>
To: Eric Browning <ericbrowning@skaggscatholiccenter.org>
Cc: FreeBSD FS <freebsd-fs@freebsd.org>
Subject: Re: NFS on ZFS pure SSD pool
Message-ID: <CAOFF+Z3jNvnWbE5C8LarRE7-SQbB4Q9NJTn01J=FGP9uppKdaw@mail.gmail.com>
In-Reply-To: <CAM=5oeAWbV1wzscnTHHH1=FFQ6DYjB+priGowe1WcBfM=SsPXg@mail.gmail.com>
References: <CAM=5oeDBJKo0qfcpaeUn6DJKB3WzxH0yhoK0U304P6S1tB3D-A@mail.gmail.com>
 <2008996797.14358576.1377631792358.JavaMail.root@uoguelph.ca>
 <CAM=5oeAWbV1wzscnTHHH1=FFQ6DYjB+priGowe1WcBfM=SsPXg@mail.gmail.com>
On Wed, Aug 28, 2013 at 2:27 PM, Eric Browning
<ericbrowning@skaggscatholiccenter.org> wrote:
> Rick,
>
> Sam and I applied the patch (kernel now at r254983M) and set
> vfs.nfsd.tcphighwater=5000 in sysctl.conf and my CPU is still slammed.
> Should I up it to 10000?

Hello list,

I am helping Eric debug and test this situation as much as I can, so to
clarify and recap, here is where things stand:

This is a production setting in a school, with 200+ students using a mix
of systems; the primary client is OS X 10.8 and the primary workload is
NFS. From what I can see there should be plenty of disk I/O available,
since these are Intel SSDs.

The server is running FreeBSD 9-STABLE r254983 (we patched it last night)
with this patch:
http://people.freebsd.org/~rmacklem/drc4-stable9.patch

Here is a full dmesg for reference (it says FreeBSD 9.1, but we have since
upgraded and applied the above patch):
https://gist.github.com/sfourman/6373059

The main problem is that we need better performance from NFS, but the
server appears to be starved for CPU cycles. With only a few clients the
server is lightning fast, but when 25 users logged in this morning
(students in class) the server went right to 1200% CPU load, with about
300% more going to "intr", and it pretty much stayed there all day until
they logged out between classes. That works out to somewhere between 2 and
4 users per core during today's classes.

Different settings for vfs.nfsd.tcphighwater were tested while the load
was present, ranging from 5,000 up to 50,000, but the processor load did
not change. Garrett stated that he tried values upwards of 100,000; that
can be tested tomorrow. (The settings we have been using are sketched
below, after my signature.)

It would be helpful if we could get some direction on other things we
might try tomorrow. One idea: the server has several igb Ethernet
interfaces with 8 queues per interface. Is it worth forcing the interfaces
down to one queue? Is NFS even set up to take advantage of multi-queue
network devices, or doesn't it matter? (A sketch of that single-queue
change is also below.)

Any thoughts are appreciated.

--
Sam Fourman Jr.
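
P.S. For reference, this is roughly what we have been putting in
/etc/sysctl.conf; the exact value below is just one of the points we swept
today (5,000 to 50,000), not a recommendation:

    # /etc/sysctl.conf -- illustrative DRC tuning from today's testing
    vfs.nfsd.tcphighwater=20000

    # We also changed it at runtime between classes while watching the
    # nfsd threads in "top -SH", e.g.:
    #   sysctl vfs.nfsd.tcphighwater=50000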
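
P.P.S. If we do try the single-queue experiment on the igb interfaces, my
understanding is that the knob is the hw.igb.num_queues loader tunable (so
it needs a reboot); please correct me if there is a better way. A sketch:

    # /boot/loader.conf -- force igb(4) down to one queue per interface
    # (default is 0, which auto-sizes based on the number of cores)
    hw.igb.num_queues="1"

    # After the reboot, confirm the queue count and interrupt spread with:
    #   sysctl dev.igb.0 | grep -i queue
    #   vmstat -i | grep igb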