Date:      Sat, 20 Oct 2012 14:11:10 +0200
From:      Ivan Voras <ivoras@freebsd.org>
To:        Nikolay Denev <ndenev@gmail.com>
Cc:        "freebsd-hackers@freebsd.org Hackers" <freebsd-hackers@freebsd.org>, Rick Macklem <rmacklem@uoguelph.ca>
Subject:   Re: NFS server bottlenecks
Message-ID:  <CAF-QHFWY0drcrUpo7GGD1zQNSDWsEeB_LHAjEbUKrX2ovQHNxw@mail.gmail.com>
In-Reply-To: <23D7CB3A-BD66-427E-A7F5-6C9D3890EE1B@gmail.com>
References:  <937460294.2185822.1350093954059.JavaMail.root@erie.cs.uoguelph.ca> <302BF685-4B9D-49C8-8000-8D0F6540C8F7@gmail.com> <k5gtdh$nc0$1@ger.gmane.org> <0857D79A-6276-433F-9603-D52125CF190F@gmail.com> <CAF-QHFUU0hhtRNK1_p9zks2w+e22bfWOtv+XaqgFqTiURcJBbQ@mail.gmail.com> <6DAAB1E6-4AC7-4B08-8CAD-0D8584D039DE@gmail.com> <23D7CB3A-BD66-427E-A7F5-6C9D3890EE1B@gmail.com>

On 20 October 2012 13:42, Nikolay Denev <ndenev@gmail.com> wrote:

> Here are the results from testing both patches : http://home.totalterror.net/freebsd/nfstest/results.html
> Both tests ran for about 14 hours (a bit too long, but I wanted to compare different zfs recordsize settings),
> and each was run right after a fresh reboot.
> The only noticeable difference seems to be many more context switches with Ivan's patch.

Thank you very much for your extensive testing!

I don't know how to interpret the rise in context switches; as this is
kernel code, I'd expect no context switches. I hope someone else can
explain.
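For what it's worth, one way to think about it is to separate voluntary
switches (threads blocking on locks or I/O) from involuntary preemptions.
The little sketch below only illustrates the two per-process counters that
getrusage(2) exposes for an ordinary userland program; it says nothing about
the nfsd kernel threads themselves, so take it just as an illustration of
the distinction, not as a diagnostic for this case:

/*
 * Sketch: distinguish voluntary vs. involuntary context switches of the
 * current process with getrusage(2).  The counters are per-process, so
 * this only demonstrates the concept.
 */
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	struct rusage before, after;

	getrusage(RUSAGE_SELF, &before);

	/* ... the workload of interest would go here ... */
	sleep(1);	/* a blocking call shows up as a voluntary switch */

	getrusage(RUSAGE_SELF, &after);

	printf("voluntary:   %ld\n", after.ru_nvcsw - before.ru_nvcsw);
	printf("involuntary: %ld\n", after.ru_nivcsw - before.ru_nivcsw);
	return (0);
}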

But you have also shown that my patch doesn't do any better than
Rick's, even on a fairly large configuration, so I don't think there's
value in adding the extra complexity; besides, Rick knows NFS much
better than I do.

But there are a few other things I'm curious about: why does your load
average spike to almost 20, and why, with 24 drives in RAID-10, do you
only push about 600 MBit/s through the 10 GBit/s Ethernet? Have you
tested your drive setup locally (AES-NI shouldn't be a bottleneck; you
should be able to encrypt well into the GByte/s range) and the network?
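To be clear about what I mean by testing locally: even something as
simple as the sketch below (a plain sequential read of a big file or the
raw device, with NFS and the network out of the picture) would already
tell a lot. The path in it is only a placeholder, and it should be run on
a file larger than RAM, or right after a reboot, so ZFS's ARC doesn't
skew the numbers:

/*
 * Quick-and-dirty local sequential read test: read a file or raw device
 * with a large buffer and report throughput, so the disk/geli side can
 * be measured without NFS or the network in the path.
 * The default path below is just a placeholder.
 */
#include <sys/time.h>
#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BUFSZ	(1024 * 1024)	/* 1 MB reads */

int
main(int argc, char **argv)
{
	const char *path = (argc > 1) ? argv[1] : "/tank/testfile";
	struct timeval start, end;
	off_t total = 0;
	ssize_t n;
	double secs;
	char *buf;
	int fd;

	if ((buf = malloc(BUFSZ)) == NULL)
		err(1, "malloc");
	if ((fd = open(path, O_RDONLY)) < 0)
		err(1, "open %s", path);

	gettimeofday(&start, NULL);
	while ((n = read(fd, buf, BUFSZ)) > 0)
		total += n;
	gettimeofday(&end, NULL);

	secs = (end.tv_sec - start.tv_sec) +
	    (end.tv_usec - start.tv_usec) / 1e6;
	printf("%jd bytes in %.2f s = %.1f MB/s\n",
	    (intmax_t)total, secs, total / secs / 1e6);

	close(fd);
	free(buf);
	return (0);
}

For the network side, a plain TCP throughput test between the client and
the server (iperf, netperf, or similar) should be enough to rule it out.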

If you have the time, could you repeat the tests with a recent Samba
server and a CIFS mount on the client side? This is probably not
important; I'm just curious how it would perform on your machine.


