Date:      Sat, 20 Oct 2012 16:00:01 +0300
From:      Nikolay Denev <ndenev@gmail.com>
To:        Ivan Voras <ivoras@freebsd.org>
Cc:        "freebsd-hackers@freebsd.org Hackers" <freebsd-hackers@freebsd.org>, Rick Macklem <rmacklem@uoguelph.ca>
Subject:   Re: NFS server bottlenecks
Message-ID:  <C10B14C4-943E-47CC-B6A7-4596A2D11D73@gmail.com>
In-Reply-To: <CAF-QHFWY0drcrUpo7GGD1zQNSDWsEeB_LHAjEbUKrX2ovQHNxw@mail.gmail.com>
References:  <937460294.2185822.1350093954059.JavaMail.root@erie.cs.uoguelph.ca> <302BF685-4B9D-49C8-8000-8D0F6540C8F7@gmail.com> <k5gtdh$nc0$1@ger.gmane.org> <0857D79A-6276-433F-9603-D52125CF190F@gmail.com> <CAF-QHFUU0hhtRNK1_p9zks2w+e22bfWOtv+XaqgFqTiURcJBbQ@mail.gmail.com> <6DAAB1E6-4AC7-4B08-8CAD-0D8584D039DE@gmail.com> <23D7CB3A-BD66-427E-A7F5-6C9D3890EE1B@gmail.com> <CAF-QHFWY0drcrUpo7GGD1zQNSDWsEeB_LHAjEbUKrX2ovQHNxw@mail.gmail.com>


On Oct 20, 2012, at 3:11 PM, Ivan Voras <ivoras@freebsd.org> wrote:

> On 20 October 2012 13:42, Nikolay Denev <ndenev@gmail.com> wrote:
>
>> Here are the results from testing both patches:
>> http://home.totalterror.net/freebsd/nfstest/results.html
>> Both tests ran for about 14 hours (a bit too much, but I wanted to
>> compare different ZFS recordsize settings), and each was started
>> right after a fresh reboot.
>> The only noticeable difference seems to be many more context
>> switches with Ivan's patch.
>
> Thank you very much for your extensive testing!
>
> I don't know how to interpret the rise in context switches; as this is
> kernel code, I'd expect no context switches. I hope someone else can
> explain.
>
> But, you have also shown that my patch doesn't do any better than
> Rick's even on a fairly large configuration, so I don't think there's
> value in adding the extra complexity, and Rick knows NFS much better
> than I do.
>
> But there are a few other things I'm interested in: why does your
> load average spike to almost 20, and how come with 24 drives in
> RAID-10 you only push 600 Mbit/s through the 10 Gbit/s Ethernet?
> Have you tested your drive setup locally (AESNI shouldn't be a
> bottleneck, you should be able to encrypt well into the GByte/s
> range) and the network?
>
> If you have the time, could you repeat the tests but with a recent
> Samba server and a CIFS mount on the client side? This is probably
> not important, but I'm just curious how it would perform on your
> machine.

The first local iozone run has finished; I'll paste just its result
here, along with the same test over NFS for comparison:
(This is iozone doing 8k-sized I/O ops on a ZFS dataset with
recordsize=8k.)

NFS:
                                                            random  random
              KB  reclen   write rewrite    read    reread    read   write
        33554432       8    4973    5522     2930     2906    2908    3886

Local:
                                                            random  random
              KB  reclen   write rewrite    read    reread    read   write
        33554432       8   34740   41390   135442   142534   24992   12493
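
For anyone who wants to run a comparable test: the exact command line
isn't shown above, but an iozone invocation roughly like the one below
(the test file path is just a placeholder) covers the same operations,
i.e. a 32 GB file with 8 KB records, sequential write/rewrite and
read/reread, plus random read/write:

  # -i 0 = write/rewrite, -i 1 = read/reread, -i 2 = random read/write
  # -s 32g / -r 8k match the 33554432 KB file and 8 KB record size above;
  # point -f at a file on the dataset (or NFS mount) under test
  iozone -R -i 0 -i 1 -i 2 -s 32g -r 8k -f /tank/test/iozone.tmp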


P.S.: I forgot to mention that the network uses a 9K MTU.


