Date: Wed, 22 Jan 2014 01:41:34 -0600 (CST)
From: Bryan Venteicher <bryanv@daemoninthecloset.org>
To: Eric Dombroski <eric@edombroski.com>
Cc: freebsd-stable@freebsd.org
Subject: Re: Major performance/stability regression in virtio network drivers between 9.2-RELEASE and 10.0-RC5
Message-ID: <2121752681.4348.1390376494793.JavaMail.root@daemoninthecloset.org>
In-Reply-To: <CA+=CMd1T0=BXe9a=VqMc5cFkEvob8jbv8d9rV2+Snha4hfOj1Q@mail.gmail.com>
References: <CA+=CMd3jeNevdzMQTCG5hEE91Tnmy=9VKfSOdsJaiqo7jYTvJg@mail.gmail.com>
 <CA+=CMd1T0=BXe9a=VqMc5cFkEvob8jbv8d9rV2+Snha4hfOj1Q@mail.gmail.com>
Hi,

----- Original Message -----
> PR: http://www.freebsd.org/cgi/query-pr.cgi?pr=185864
>

There are a couple of things going on here. First, we're spending an
absurd amount of time in bpf [1], specifically when getting the time
value in acpi_timer_get_timecount(). This amounts to nearly 25% of the
profiled run. I'm guessing that a filter is using BPF_TSTAMP_NORMAL in
10.0, whereas it was using BPF_TSTAMP_FAST in 9.2. Are you using DHCP
on the vtnetX interface? Can you use a static IP instead (and perhaps
even remove `device bpf` from your kernel config to be sure)? A rough
sketch of both changes is appended after the quoted message below.

Second, the rate of Tx completion interrupts is much higher than I
recall. I'm still thinking about this.

[1] - http://people.freebsd.org/~bryanv/vtnet/vtnet-bpf-10.svg

Bryan

> On Sat, Jan 18, 2014 at 1:51 PM, Eric Dombroski <eric@edombroski.com> wrote:
>
> > Hello:
> >
> > I believe there is a major performance regression between FreeBSD
> > 9.2-RELEASE and 10.0-RC5 involving the virtio network drivers (vtnet)
> > and the handling of incoming traffic. Below are the results of some
> > iperf tests and large dd operations over NFS. Write throughput drops
> > from ~40 Gbits/sec to ~2.4 Gbits/sec between 9.2 and 10.0-RC5, and
> > over time the connection becomes unstable ("no buffer space
> > available"), requiring the interface to be taken down and back up.
> >
> > These results are on fresh installs of 9.2 and 10.0-RC5, with no
> > sysctl tweaks on either system.
> >
> > I can't reproduce this using an Intel 1 Gbps NIC through PCIe
> > passthrough, although I suspect the problem only manifests itself at
> > speeds above 1 Gbps anyway.
> >
> > Tests:
> >
> > Client (host):
> > root@gogo:~# uname -a
> > Linux gogo 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux
> > root@gogo:~# kvm -version
> > QEMU emulator version 1.1.2 (qemu-kvm-1.1.2+dfsg-6, Debian),
> > Copyright (c) 2003-2008 Fabrice Bellard
> > root@gogo:~# lsmod | grep vhost
> > vhost_net              27436  3
> > tun                    18337  8 vhost_net
> > macvtap                17633  1 vhost_net
> >
> > Command: iperf -c 192.168.100.x -t 60
> >
> > Server (FreeBSD 9.2 VM):
> >
> > root@umarotest:~ # uname -a
> > FreeBSD umarotest 9.2-RELEASE-p3 FreeBSD 9.2-RELEASE-p3 #0: Sat Jan
> > 11 03:25:02 UTC 2014
> > root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64
> > root@umarotest:~ # iperf -s
> > ------------------------------------------------------------
> > Server listening on TCP port 5001
> > TCP window size: 64.0 KByte (default)
> > ------------------------------------------------------------
> > [  4] local 192.168.100.44 port 5001 connected with 192.168.100.1 port 58996
> > [ ID] Interval       Transfer     Bandwidth
> > [  4]  0.0-60.0 sec   293 GBytes  41.9 Gbits/sec
> > [  5] local 192.168.100.44 port 5001 connected with 192.168.100.1 port 58997
> > [  5]  0.0-60.0 sec   297 GBytes  42.5 Gbits/sec
> > [  4] local 192.168.100.44 port 5001 connected with 192.168.100.1 port 58998
> > [  4]  0.0-60.0 sec   291 GBytes  41.6 Gbits/sec
> > [  5] local 192.168.100.44 port 5001 connected with 192.168.100.1 port 58999
> > [  5]  0.0-60.0 sec   297 GBytes  42.6 Gbits/sec
> > [  4] local 192.168.100.44 port 5001 connected with 192.168.100.1 port 59000
> > [  4]  0.0-60.0 sec   297 GBytes  42.5 Gbits/sec
> >
> > While pinging out from the server to the client, I do not get any
> > errors.
> >
> > root@umaro:~ # uname -a
> > FreeBSD umaro 10.0-RC5 FreeBSD 10.0-RC5 #0 r260430: Wed Jan  8
> > 05:10:04 UTC 2014
> > root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
> > root@umaro:~ # iperf -s
> > ------------------------------------------------------------
> > Server listening on TCP port 5001
> > TCP window size: 64.0 KByte (default)
> > ------------------------------------------------------------
> > [  4] local 192.168.100.5 port 5001 connected with 192.168.100.1 port 50264
> > [ ID] Interval       Transfer     Bandwidth
> > [  4]  0.0-60.0 sec  16.7 GBytes  2.39 Gbits/sec
> > [  5] local 192.168.100.5 port 5001 connected with 192.168.100.1 port 50265
> > [  5]  0.0-60.0 sec  18.3 GBytes  2.62 Gbits/sec
> > [  4] local 192.168.100.5 port 5001 connected with 192.168.100.1 port 50266
> > [  4]  0.0-60.0 sec  16.8 GBytes  2.40 Gbits/sec
> > [  5] local 192.168.100.5 port 5001 connected with 192.168.100.1 port 50267
> > [  5]  0.0-60.0 sec  16.8 GBytes  2.40 Gbits/sec
> > [  4] local 192.168.100.5 port 5001 connected with 192.168.100.1 port 50268
> > [  4]  0.0-60.0 sec  16.8 GBytes  2.41 Gbits/sec
> >
> > *** While pinging out from the server to the client, I get frequent
> > "ping: sendto: No space left on device" errors. ***
> >
> > After a while, I can also reliably reproduce more egregious "ping:
> > sendto: No buffer space available" errors after doing a large
> > sequential write over NFS:
> >
> > mount -t nfs -o rsize=65536,wsize=65536 192.168.100.5:/storage/shared /mnt/nfs
> > dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=30000
> >
> > I am going to file a FreeBSD bug report as well.
> >
> > Thanks,
> > Eric
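
As a rough sketch of the static-IP / `device bpf` suggestion above (the
vtnet0 name and the 192.168.100.5/24 address are taken straight from
the test output, and NOBPF is just an example config name -- adjust
both to the actual setup):

    # /etc/rc.conf -- replace ifconfig_vtnet0="DHCP" with a static
    # address; add a defaultrouter line as well if the guest needs a
    # gateway
    ifconfig_vtnet0="inet 192.168.100.5 netmask 255.255.255.0"

    # /usr/src/sys/amd64/conf/NOBPF -- a GENERIC-based kernel config
    # with the Berkeley packet filter removed
    include         GENERIC
    ident           NOBPF
    nodevice        bpf

    # Build and install the custom kernel, then reboot
    cd /usr/src
    make buildkernel KERNCONF=NOBPF
    make installkernel KERNCONF=NOBPF

Keep in mind that dhclient itself depends on bpf, so the static address
has to be in place before booting a kernel without `device bpf`.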
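
Since the flame graph linked as [1] above may be of interest to anyone
else chasing this: one common way to collect a comparable on-CPU kernel
profile is DTrace plus Brendan Gregg's FlameGraph scripts
(https://github.com/brendangregg/FlameGraph). This is only a sketch --
it assumes the guest kernel has DTrace support compiled in (I believe
the stock 10.0 GENERIC kernel does), the output file names are
arbitrary, and it is not necessarily how [1] itself was generated:

    # Load the DTrace modules if they are not loaded already
    kldload dtraceall

    # Sample on-CPU kernel stacks at ~997 Hz for 60 seconds while the
    # iperf (or NFS dd) test is running; the /arg0/ predicate keeps
    # only samples taken in kernel context
    dtrace -x stackframes=100 \
        -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-60s { exit(0); }' \
        -o out.kstacks

    # Fold the stacks and render the SVG with the FlameGraph scripts
    ./stackcollapse.pl out.kstacks > out.folded
    ./flamegraph.pl out.folded > vtnet-profile.svg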