Date:      Mon, 8 Aug 2011 21:48:51 -0400
From:      George Neville-Neil <gnn@freebsd.org>
To:        Takuya ASADA <syuu@dokukino.com>
Cc:        "Robert N. M. Watson" <rwatson@freebsd.org>, soc-status@freebsd.org, Kazuya Goda <gockzy@gmail.com>
Subject:   Re: [mq_bpf] status report #10
Message-ID:  <DF2856FF-5B48-4622-A22F-5E0AB0267915@freebsd.org>
In-Reply-To: <CALG4x-WHJAj2z0kHNk9NNHD8GFrLxmCULm7NBeMk_CgBpk=nXA@mail.gmail.com>
References:  <CALG4x-WHJAj2z0kHNk9NNHD8GFrLxmCULm7NBeMk_CgBpk=nXA@mail.gmail.com>


On Aug 4, 2011, at 15:18, Takuya ASADA wrote:

> *Project summary
> The goal of the project is to support multiqueue network interfaces in
> BPF and to provide interfaces for multithreaded packet processing using
> BPF. Modern high-performance NICs have multiple receive/send queues and
> an RSS feature, which allows packets to be processed concurrently on
> multiple processors.
> The main purpose of the project is to support such hardware and take
> advantage of that parallelism (a bpf(4) reader sketch follows below).
>
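For reference, a conventional single-queue bpf(4) reader looks roughly like
the sketch below; mq_bpf extends this model so a descriptor can be bound to
a single receive queue. The queue-binding ioctl in the comment is only a
placeholder for whatever interface the project adds, and error handling is
omitted.

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <net/if.h>
    #include <net/bpf.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
            struct ifreq ifr;
            u_int buflen = 1048576;        /* matches net.bpf.maxbufsize below */
            int fd = open("/dev/bpf", O_RDONLY);
            char *buf;

            ioctl(fd, BIOCSBLEN, &buflen); /* set buffer size before binding */
            memset(&ifr, 0, sizeof(ifr));
            strlcpy(ifr.ifr_name, "ix0", sizeof(ifr.ifr_name));
            ioctl(fd, BIOCSETIF, &ifr);    /* attach to the interface */
            /* a hypothetical per-queue binding would go here, e.g.
               ioctl(fd, BIOCSRXQUEUE, &qid); -- the name is a placeholder */
            buf = malloc(buflen);
            for (;;) {
                    ssize_t n = read(fd, buf, buflen);
                    if (n <= 0)
                            break;
                    /* walk the bpf_hdr records in buf[0..n) here */
            }
            free(buf);
            close(fd);
            return (0);
    }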
> Here's the status update from the past week:
> I replaced test_mqbpf and test_sqbpf with versions based on bpfnull, from
> the following repository:
> //depot/projects/zcopybpf/utils/bpfnull/
>
> test_sqbpf is almost the same as bpfnull, but adds a 60-second timeout
> and a throughput calculation to the result output (sketched below).
> http://p4db.freebsd.org/fileViewer.cgi?FSPC=//depot/projects/soc2011/mq_bpf/src/tools/regression/bpf/mq_bpf/test_sqbpf/test_sqbpf.c&REV=3
>
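The Mbps figures reported below are presumably computed along these lines,
from the total bytes read off the bpf descriptor over the run; this is an
assumed sketch, not the tool's actual code.

    #include <stdint.h>

    /* assumed shape of the reported throughput figure */
    static double
    mbps(uint64_t total_bytes, double elapsed_sec)
    {
            return (total_bytes * 8.0 / elapsed_sec / 1e6);
    }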
> test_mqbpf is a multithreaded version of test_sqbpf, with CPU pinning
> and queue pinning (see the pinning sketch below).
> http://p4db.freebsd.org/fileViewer.cgi?FSPC=//depot/projects/soc2011/mq_bpf/src/tools/regression/bpf/mq_bpf/test_mqbpf/test_mqbpf.c&REV=3
>
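The CPU pinning part can be done with FreeBSD's cpuset(2) interface; a
minimal per-thread sketch, assuming one worker per receive queue, might look
like this. The queue-binding step itself is project-specific and only hinted
at in the comment.

    #include <sys/param.h>
    #include <sys/cpuset.h>
    #include <pthread.h>

    /* worker pinned to one CPU; one such thread per NIC receive queue */
    static void *
    worker(void *arg)
    {
            int cpu = *(int *)arg;
            cpuset_t mask;

            CPU_ZERO(&mask);
            CPU_SET(cpu, &mask);
            /* pin the calling thread to the chosen CPU */
            cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_TID, -1,
                sizeof(mask), &mask);
            /* open a bpf descriptor, bind it to the matching receive
               queue, then read(2) and count bytes in a loop */
            return (NULL);
    }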
> In the previous benchmark I only used the Intel 82576 GbE NIC; this week
> I also benchmarked with the Intel 82599 10GbE NIC, after adding mq_bpf
> driver support for it.
> http://p4web.freebsd.org/@@197123?ac=10
>
> I benchmarked under six conditions:
> - benchmark1 only reads from bpf and doesn't write packets anywhere
> - benchmark2 writes packets to memory (mfs)
> - benchmark3 writes packets to hdd (zfs)
> - benchmark4 only reads from bpf and doesn't write packets anywhere,
>   with zerocopy
> - benchmark5 writes packets to memory (mfs), with zerocopy
> - benchmark6 writes packets to hdd (zfs), with zerocopy
>
> From the benchmark results, I can say that performance increases with
> mq_bpf on 10GbE, but not on GbE.
>
Well, you are nearly at the bandwidth of the link on GbE.  Are those
numbers without dropping any packets?

Best,
George
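The drop question can be answered directly from the descriptor: bpf(4)'s
BIOCGSTATS ioctl returns the counts of packets received and dropped, roughly
as in this sketch.

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <net/bpf.h>
    #include <stdio.h>

    /* report how many packets the bpf descriptor received and dropped */
    static void
    print_bpf_stats(int bpf_fd)
    {
            struct bpf_stat st;

            if (ioctl(bpf_fd, BIOCGSTATS, &st) == 0)
                    printf("recv %u drop %u\n", st.bs_recv, st.bs_drop);
    }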



> - Test environment
>  - FreeBSD node
>    CPU: Core i7 X980 (12 threads)
>    MB: ASUS P6X58D Premium(Intel X58)
>    NIC1: Intel Gigabit ET Dual Port Server Adapter(82576)
>    NIC2: Intel Ethernet X520-DA2 Server Adapter(82599)
>  - Linux node
>    CPU: Core 2 Quad (4 threads)
>    MB: GIGABYTE GA-G33-DS3R(Intel G33)
>    NIC1: Intel Gigabit ET Dual Port Server Adapter(82576)
>    NIC2: Intel Ethernet X520-DA2 Server Adapter(82599)
>
> iperf was used to generate network traffic, with the following options:
>    - Linux node: iperf -c [IP] -i 10 -t 100000 -P12
>    - FreeBSD node: iperf -s
>    # 12 threads, TCP
>
> The following sysctl parameter was changed:
>    sysctl -w net.bpf.maxbufsize=1048576
>
> - Benchmark1
> Benchmark1 only reads from bpf and doesn't write packets anywhere, using
> the following commands:
> ./test_sqbpf -i [interface] -b 1048576
> ./test_mqbpf -i [interface] -b 1048576
>    - ixgbe
>        test_mqbpf: 5303.09007533333 Mbps
>        test_sqbpf: 3959.83021733333 Mbps
>    - igb
>        test_mqbpf: 916.752133333333 Mbps
>        test_sqbpf: 917.597079 Mbps
>
> - Benchmark2
> Benchmark2 writes packets to an mfs filesystem, using the following commands:
> mdmfs -s 10G md /mnt
> ./test_sqbpf -i [interface] -b 1048576 -w -f /mnt/test
> ./test_mqbpf -i [interface] -b 1048576 -w -f /mnt/test
>    - ixgbe
>        test_mqbpf: 1061.24890333333 Mbps
>        test_sqbpf: 204.779881 Mbps
>    - igb
>        test_mqbpf: 916.656664666667 Mbps
>        test_sqbpf: 914.378636 Mbps
>
> - Benchmark3
> Benchmark3 writes packets to zfs (on HDD), using the following commands:
> ./test_sqbpf -i [interface] -b 1048576 -w -f test
> ./test_mqbpf -i [interface] -b 1048576 -w -f test
>    - ixgbe
>        test_mqbpf: 119.912253333333 Mbps
>        test_sqbpf: 101.195918 Mbps
>    - igb
>        test_mqbpf: 228.910355333333 Mbps
>        test_sqbpf: 199.639093666667 Mbps
>
> - Benchmark4
> Benchmark4 only reads from bpf and doesn't write packets anywhere, with
> zerocopy, using the following commands:
> ./test_sqbpf -i [interface] -b 1048576
> ./test_mqbpf -i [interface] -b 1048576
>    - ixgbe
>        test_mqbpf: 4772.924974 Mbps
>        test_sqbpf: 3173.19967133333 Mbps
>    - igb
>        test_mqbpf: 931.217345 Mbps
>        test_sqbpf: 925.965270666667 Mbps
>
> - Benchmark5
> Benchmark5 writes packets to mfs with zerocopy, using the following commands:
> mdmfs -s 10G md /mnt
> ./test_sqbpf -i [interface] -b 1048576 -w -f /mnt/test
> ./test_mqbpf -i [interface] -b 1048576 -w -f /mnt/test
>    - ixgbe
>        test_mqbpf: 306.902822333333 Mbps
>        test_sqbpf: 317.605016666667 Mbps
>    - igb
>        test_mqbpf: 729.075349666667 Mbps
>        test_sqbpf: 708.987822666667 Mbps
>
> - Benchmark6
> Benchmark6 writes packets to zfs (on HDD) with zerocopy, using the
> following commands:
> ./test_sqbpf -i [interface] -b 1048576 -w -f test
> ./test_mqbpf -i [interface] -b 1048576 -w -f test
>    - ixgbe
>        test_mqbpf: 174.016136666667 Mbps
>        test_sqbpf: 138.068732666667 Mbps
>    - igb
>        test_mqbpf: 228.794880333333 Mbps
>        test_sqbpf: 229.367386333333 Mbps



