Date:      Sat, 7 Mar 2015 12:47:34 +0100
From:      Luigi Rizzo <rizzo@iet.unipi.it>
To:        Wei Hu <weh@microsoft.com>
Cc:        "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject:   Re: Network interrupt and NAPI in FreeBSD?
Message-ID:  <CA+hQ2+hVMVbrWJFR6MYiMy239X7KbC7LvJop664mBiayHMA-_Q@mail.gmail.com>
In-Reply-To: <BY1PR0301MB09024C00FF371C7B9E57711EBB1D0@BY1PR0301MB0902.namprd03.prod.outlook.com>
References:  <BY1PR0301MB0902C944E2287E0C6B5D8287BB1C0@BY1PR0301MB0902.namprd03.prod.outlook.com> <CA+hQ2+i6d_tsqKpkYVh2jnVQaS4TZ5X7zyh_54ZecQDjm+FkRA@mail.gmail.com> <BY1PR0301MB09024C00FF371C7B9E57711EBB1D0@BY1PR0301MB0902.namprd03.prod.outlook.com>

On Sat, Mar 7, 2015 at 8:19 AM, Wei Hu <weh@microsoft.com> wrote:
> Many thanks, Luigi! We are measuring network performance in a VM (Hyper-V),
> using the netvsc virtual NIC device and its own driver. The Linux VM uses a
> similar virtual device. The drivers on both Linux and FreeBSD have TSO/LRO
> support. With just one network queue, we found the throughput is higher on
> Linux (around 2.5 - 3 Gbps) than on FreeBSD (just around 1.6 Gbps) with a
> 10 Gb NIC. If the INVARIANTS option is disabled, FreeBSD can achieve
> 2 - 2.3 Gbps. A much higher interrupt rate was observed on FreeBSD.
>
> Thanks for all the suggestions. Do you think netmap could help in this case?

netmap per se probably won't help in this case, but it is often useful for
stress-testing the datapath and figuring out where the bottlenecks are.

In fact you could try running the netmap test program, pkt-gen (in
tools/tools/netmap), in emulated mode to see how many pps you can send or
receive through the netvsc interface. You need to kldload the netmap module
(it is already built in by default in recent GENERIC kernels) and then do

   pkt-gen -i hn0 -f tx # whatever the name is for the hyperv interface
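
A rough sequence would look like the following (the interface name hn0 is an
assumption, substitute whatever your hyperv interface is called, and the
-n/-l values are just illustrative):

   # load emulated netmap support if it is not compiled into the kernel
   kldload netmap
   # transmit test: send 10M small frames out of hn0
   pkt-gen -i hn0 -f tx -n 10000000 -l 60
   # receive test: count packets arriving on hn0
   # (generate the traffic from the other endpoint)
   pkt-gen -i hn0 -f rx

Comparing the tx and rx rates with what the Linux guest achieves gives a
rough idea of whether the per-packet cost is in the guest driver or in the
host side of the datapath.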

I don't know what to expect, but unoptimized qemu/kvm using the e1000 driver
was as low as 50-100 Kpps, and optimized setups went up to 500 Kpps and more
even without netmap (I believe FreeBSD's virtio is even faster now).


Obviously you should disable INVARIANTS, but the numbers you cite (2-3 Gbps)
are really low, which suggests to me that there might be some performance
problem in the hypervisor itself.
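
In case it helps, a minimal custom kernel config that strips the debugging
options is sketched below (the config name NODEBUG is made up, and the
nooptions list assumes those options were inherited from the GENERIC config
you are building from; drop any line for an option your config does not set):

   include GENERIC
   ident   NODEBUG
   nooptions INVARIANTS
   nooptions INVARIANT_SUPPORT
   nooptions WITNESS
   nooptions WITNESS_SKIPSPIN

Then rebuild from /usr/src with "make buildkernel KERNCONF=NODEBUG" followed
by "make installkernel KERNCONF=NODEBUG" and reboot.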

I also do not see any TSO reference in the source code for the netvsc guest
driver in FreeBSD, so I am not sure whether it is really supported or not.
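
A quick way to check from inside the guest (again assuming the interface is
named hn0) is to look at the capability flags the driver advertises:

   ifconfig hn0
   # look for TSO4/TSO6 and LRO in the "options=" line; if they are
   # missing, the driver is not exposing those offloads to the stack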

cheers
luigi


