Date:      Wed, 27 Jul 2016 12:44:51 -0700
From:      Adrian Chadd <adrian.chadd@gmail.com>
To:        John Baldwin <jhb@freebsd.org>
Cc:        "src-committers@freebsd.org" <src-committers@freebsd.org>,  "svn-src-all@freebsd.org" <svn-src-all@freebsd.org>,  "svn-src-head@freebsd.org" <svn-src-head@freebsd.org>
Subject:   Re: svn commit: r303405 - in head/sys/dev/cxgbe: . tom
Message-ID:  <CAJ-Vmon=jEhT6MCKs0=vQjzV9HvwD-oH-qRRXVCGdNVpd721Yg@mail.gmail.com>
In-Reply-To: <3422795.rot3cCl2OH@ralph.baldwin.cx>
References:  <201607271829.u6RITZlx041710@repo.freebsd.org> <3422795.rot3cCl2OH@ralph.baldwin.cx>

[snip]

When we had my kqueue sendfile stuff in the tree to handle completion
notifications, I was getting 40Gbps across something like 64k sockets
on 8 cores using SHM sendfile + kqueue sendfile completion. It worked
pretty well. One core could get 40Gbps if I pre-seeded enough data
into it (i.e., multiple in-flight sendfile transactions on a socket)
so I kept the socket buffer full. Otherwise there'd be dead time where
the socket buffer was empty.
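
(For reference, a rough sketch of the "keep the socket buffer full"
pattern using only the stock sendfile(2) + kqueue API - the
completion-event patch itself isn't in the tree, so EVFILT_WRITE
stands in for the real completion notification here, and the function
names and structure are illustrative, not from the patch:)

#include <sys/types.h>
#include <sys/event.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <err.h>
#include <errno.h>

/*
 * Keep queuing file data into a non-blocking socket, waking up via
 * kqueue whenever the send buffer has room again.  Assumes sock_fd
 * is already set non-blocking and kq is an existing kqueue.
 */
static void
pump_file(int kq, int file_fd, int sock_fd, off_t file_size)
{
	struct kevent ev;
	off_t off = 0, sent;

	EV_SET(&ev, sock_fd, EVFILT_WRITE, EV_ADD, 0, 0, NULL);
	if (kevent(kq, &ev, 1, NULL, 0, NULL) == -1)
		err(1, "kevent register");

	while (off < file_size) {
		/* Block until the socket buffer has space again. */
		if (kevent(kq, NULL, 0, &ev, 1, NULL) == -1)
			err(1, "kevent wait");

		/*
		 * Queue as much as the socket buffer will take; on a
		 * non-blocking socket sendfile() returns EAGAIN once
		 * the buffer fills and reports progress via 'sent'.
		 */
		sent = 0;
		if (sendfile(file_fd, sock_fd, off, file_size - off,
		    NULL, &sent, 0) == -1 && errno != EAGAIN)
			err(1, "sendfile");
		off += sent;
	}
}

(With the actual completion events you'd presumably get a kevent when
a sendfile transaction drained rather than polling writability, which
is what made keeping multiple transactions in flight cheap.)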

It was very sensitive to the TCP segment size - i.e., if for some
reason the TCP TX path passed up chunks smaller than 32k, it would end
up chewing far too much CPU in tcp_output(). That in turn was very
sensitive to the TX write() size and the latency to the receiver. It
wasn't running out of data either; it was just some side effect of how
big the writes were and how quickly the TX socket buffer was being
topped off. I never got to finish instrumenting the full relationship
going on there (between TSO and buffer sizes/latency) to see if it
could be better engineered so we always kept TSO fed with full-sized
bursts when we could.
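
(The instrumentation I never finished would look something along
these lines - stock TCP_INFO plus FIONWRITE to correlate MSS and cwnd
against send-buffer occupancy; this is an illustrative sketch, not
the actual harness:)

#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <err.h>
#include <stdio.h>

/*
 * Snapshot the TX side of a TCP socket: the MSS and congestion
 * window TCP is working with, plus how many bytes are sitting in
 * the send buffer waiting to go out.
 */
static void
dump_tx_state(int s)
{
	struct tcp_info ti;
	socklen_t len = sizeof(ti);
	int unsent;

	if (getsockopt(s, IPPROTO_TCP, TCP_INFO, &ti, &len) == -1)
		err(1, "getsockopt(TCP_INFO)");
	/* FIONWRITE: bytes queued in the descriptor's send buffer. */
	if (ioctl(s, FIONWRITE, &unsent) == -1)
		err(1, "ioctl(FIONWRITE)");

	printf("snd_mss=%u snd_cwnd=%u sndbuf_used=%d\n",
	    ti.tcpi_snd_mss, ti.tcpi_snd_cwnd, unsent);
}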

-adrian


