Date:      Fri, 24 Jan 2014 19:10:00 -0500 (EST)
From:      Rick Macklem <rmacklem@uoguelph.ca>
To:        J David <j.david.lists@gmail.com>
Cc:        freebsd-net@freebsd.org
Subject:   Re: Terrible NFS performance under 9.2-RELEASE?
Message-ID:  <179007387.16041087.1390608600325.JavaMail.root@uoguelph.ca>
In-Reply-To: <CABXB=RTTCfxP_Ebp3aa4k9qr5QrGDVQQMr1R1w0wBTUBD1OtwA@mail.gmail.com>

J David wrote:
> On Fri, Jan 24, 2014 at 5:54 PM, Rick Macklem <rmacklem@uoguelph.ca>
> wrote:
> > But disabling it will identify if that is causing the problem. And
> > it
> > is a workaround that often helps people get things to work. (With
> > real
> > hardware, there may be no way to "fix" such things, depending on
> > the
> > chipset, etc.)
> 
> There are two problems that are crippling NFS performance with large
> block sizes.
> 
> One is the extraneous NFS read-on-write issue I documented earlier
> today that has nothing to do with network topology or packet size.
> You might have more interest in that one.
> 
Afraid not. Here is the commit message for the change that added the read
before partial write: r46349, dated May 2, 1999.
(As you will see, there is a lot to it, and I am not the guy to try
 to put it back the old way without breaking anything.)

The VFS/BIO subsystem contained a number of hacks in order to optimize
piecemeal, middle-of-file writes for NFS.  These hacks have caused no
end of trouble, especially when combined with mmap().  I've removed
them.  Instead, NFS will issue a read-before-write to fully
instantiate the struct buf containing the write.  NFS does, however,
optimize piecemeal appends to files.  For most common file operations,
you will not notice the difference.  The sole remaining fragment in
the VFS/BIO system is b_dirtyoff/end, which NFS uses to avoid cache
coherency issues with read-merge-write style operations.  NFS also
optimizes the write-covers-entire-buffer case by avoiding the
read-before-write.  There is quite a bit of room for further
optimization in these areas.

The VM system marks pages fully-valid (AKA vm_page_t->valid =
VM_PAGE_BITS_ALL) in several places, most notably in vm_fault.  This
is not correct operation.  The vm_pager_get_pages() code is now
responsible for marking VM pages all-valid.  A number of VM helper
routines have been added to aid in zeroing-out the invalid portions of
a VM page prior to the page being marked all-valid.  This operation is
necessary to properly support mmap().  The zeroing occurs most often
when dealing with file-EOF situations.  Several bugs have been fixed
in the NFS subsystem, including bits handling file and directory EOF
situations and buf->b_flags consistency issues relating to clearing
B_ERROR & B_INVAL, and handling B_DONE.

getblk() and allocbuf() have been rewritten.  B_CACHE operation is now
formally defined in comments and more straightforward in
implementation.  B_CACHE for VMIO buffers is based on the validity of
the backing store.  B_CACHE for non-VMIO buffers is based simply on
whether the buffer is B_INVAL or not (B_CACHE set if B_INVAL clear,
and vice versa).  biodone() is now responsible for setting B_CACHE
when a successful read completes.  B_CACHE is also set when a bdwrite()
is initiated and when a bwrite() is initiated.  VFS VOP_BWRITE
routines (there are only two - nfs_bwrite() and bwrite()) are now
expected to set B_CACHE.  This means that bowrite() and bawrite() also
set B_CACHE indirectly.

There are a number of places in the code which were previously using
buf->b_bufsize (which is DEV_BSIZE aligned) when they should have
been using buf->b_bcount.  These have been fixed.  getblk() now clears
B_DONE on return because the rest of the system is so bad about
dealing with B_DONE.

Major fixes to NFS/TCP have been made.  A server-side bug could cause
requests to be lost by the server due to nfs_realign() overwriting
other rpc's in the same TCP mbuf chain.  The server's kernel must be
recompiled to get the benefit of the fixes.

Submitted by:	Matthew Dillon <dillon@apollo.backplane.com>
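
To make that concrete, here is a rough user-space model of the policy
described above (made-up names, ignoring the append optimization; this
is not the actual kernel code):

#include <stdio.h>

/*
 * Rough model: a write that lands in a buffer whose backing store is
 * already valid (B_CACHE), or that covers the entire buffer, can skip
 * the read; any other partial write must read the block in first so
 * the untouched bytes are valid.
 */
static int
needs_read_before_write(long write_off, long write_len,
    long buf_off, long buf_size, int b_cache_valid)
{
    if (b_cache_valid)
        return (0);     /* buffer contents already valid */
    if (write_off <= buf_off &&
        write_off + write_len >= buf_off + buf_size)
        return (0);     /* write covers the entire buffer */
    return (1);         /* partial write, read-before-write */
}

int
main(void)
{
    /* 2k written at offset 2k into an invalid 32k buffer: read needed. */
    printf("2k@2k, 32k buffer: %d\n",
        needs_read_before_write(2048, 2048, 0, 32768, 0));
    /* 64k write covering a whole 64k buffer: no read needed. */
    printf("64k@0, 64k buffer: %d\n",
        needs_read_before_write(0, 65536, 0, 65536, 0));
    return (0);
}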

I would like to hear whether you find Linux doing a read before write when
you use "-r 2k", since I think that is writing less than a page.
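
If you want to reproduce the partial-write case by hand, something like
the following is enough (just a sketch; the path is made up, so point it
at a file that already exists on the NFS mount, and watch "nfsstat -c"
or a packet trace to see whether a READ RPC goes out before the WRITE):

#include <err.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    char buf[2048];
    int fd;

    memset(buf, 'x', sizeof(buf));

    /* Hypothetical path; use a file on your NFS mount. */
    fd = open("/mnt/nfs/testfile", O_WRONLY);
    if (fd == -1)
        err(1, "open");

    /* 2k at a 2k offset: does not cover a whole buffer cache block. */
    if (pwrite(fd, buf, sizeof(buf), 2048) != (ssize_t)sizeof(buf))
        err(1, "pwrite");
    if (fsync(fd) == -1)
        err(1, "fsync");
    close(fd);
    return (0);
}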

> This other thing is a five-way negative interaction between 64k NFS,
> TSO, LRO, delayed ack, and congestion control.  Disabling *any* one
> of
> them is sufficient to see significant improvement, but does not serve
> to identify that it is causing the problem since it is not a unique
> characteristic.  (Even if it was, that would not determine whether a
> problem was with component X or with component Y's ability to
> interact
> with component X.)  Figuring out what's really happening has proven
> very difficult for me, largely due to my limited knowledge of these
> areas.  And the learning curve on the TCP code is pretty steep.
> 
> The "simple" explanation appears to be that NFS generates two
> packets,
> one just under 64k and one containing "the rest" and the alternating
> sizes prevent the delayed ack code from ever seeing two full-size
> segments in a row, so traffic gets pinned down to one packet per
> net.inet.tcp.delacktime (100ms default), for 10pps, as observed
> earlier.  But unfortunately, like a lot of simple explanations, this
> one appears to have the disadvantage of being more or less completely
> wrong.
> 
This simple explanation sounds interesting to me. Have you tried a
64K test with delayed ACK disabled entirely, by setting
net.inet.tcp.delayed_ack=0? (I thought someone mentioned that the
ACK is only delayed when there isn't any data to send, but I
may be wrong.)
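
For what it is worth, the arithmetic does line up: with
net.inet.tcp.delacktime at its 100ms default, one RPC per delayed-ACK
timeout is about 10 RPCs/sec, or roughly 640KB/s at a 64K wsize, which
matches the 10pps you observed. A quick sketch that reads the current
settings via sysctlbyname(3) and prints that worst-case floor:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
    int delack, delacktime;
    size_t len;

    len = sizeof(delack);
    if (sysctlbyname("net.inet.tcp.delayed_ack", &delack, &len,
        NULL, 0) == -1)
        err(1, "delayed_ack");
    len = sizeof(delacktime);
    if (sysctlbyname("net.inet.tcp.delacktime", &delacktime, &len,
        NULL, 0) == -1)
        err(1, "delacktime");

    printf("delayed_ack=%d delacktime=%dms\n", delack, delacktime);
    if (delack && delacktime > 0)
        printf("if every RPC waits out the timer: ~%d RPC/s, "
            "~%d KB/s at 64k writes\n",
            1000 / delacktime, 1000 / delacktime * 64);
    return (0);
}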

Also, I believe that the above is specific to the virtio driver
(or possibly others that handle a 64K MTU). I'm afraid that
various issues will pop up (like the one I pointed out where
disabling TSO was the "magic bullet") for different network
interfaces.
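
If you want to double check what the interface actually has enabled at
the moment, you can query the offload capabilities the same way
ifconfig(8) does; a sketch, assuming the virtio interface is vtnet0:

#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <err.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    struct ifreq ifr;
    int s;

    s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s == -1)
        err(1, "socket");

    memset(&ifr, 0, sizeof(ifr));
    strlcpy(ifr.ifr_name, "vtnet0", sizeof(ifr.ifr_name));
    if (ioctl(s, SIOCGIFCAP, &ifr) == -1)
        err(1, "SIOCGIFCAP");

    printf("TSO4: %s  LRO: %s\n",
        (ifr.ifr_curcap & IFCAP_TSO4) ? "enabled" : "disabled",
        (ifr.ifr_curcap & IFCAP_LRO) ? "enabled" : "disabled");
    close(s);
    return (0);
}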

Glebius, J David is using the virtio driver to do NFS perf. testing
and gets very poor performance when the rsize/wsize is 64K. I think
he might be willing to send you a packet capture of this, if you think
it would help explain what is going on and whether changing something
in the virtio network driver might help.

Thanks, rick

> > ps: If you had looked at the link I had in the email, you would
> > have
> >     seen that he gets very good performance once he disables TSO.
> >     As
> >     they say, your mileage may vary.
> 
> Pretty much every word written on this subject has come across my
> screens at this point.  "Very good performance" is relative.  Yes,
> you
> can get about 10-20x better performance by disabling TSO, at the
> expense of using vastly more CPU.  Which is definitely a big
> improvement, and may be sufficient for many applications.  But in
> absolute terms, the overall performance and particularly the
> efficiency remains unsatisfactory.
> 
> Thanks!
> _______________________________________________
> freebsd-net@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to
> "freebsd-net-unsubscribe@freebsd.org"
> 


