From owner-soc-status@FreeBSD.ORG Tue Aug 9 02:34:28 2011
From: George Neville-Neil <gnn@freebsd.org>
Date: Mon, 8 Aug 2011 21:31:27 -0400
Message-Id: <7FB7BCF6-5224-420D-85FA-3B82F1407E93@freebsd.org>
To: Takuya ASADA
Cc: "Robert N. M. Watson", soc-status@freebsd.org, Kazuya Goda
Subject: Re: [mq_bpf] status report #9
List-Id: Summer of Code Status Reports and Discussion

On Jul 27, 2011, at 19:11, Takuya ASADA wrote:

> * Project summary
> The project goal is to support multiqueue network interfaces in BPF,
> and to provide interfaces for multithreaded packet processing using BPF.
> Modern high-performance NICs have multiple receive/send queues and an
> RSS feature, which allows packets to be processed concurrently on
> multiple processors.
> The main purpose of the project is to support this hardware and benefit
> from the parallelism.
>
> Here's the status update from last week:
>
> * Throughput benchmark
>
> - Test environment
>   CPU: Core i7 X980
>   MB:  ASUS P6X58D Premium (Intel X58)
>   NIC: Intel Gigabit ET Dual Port Server Adapter (82576)
>
> - Benchmark programs
>   test_sqbpf is a single-threaded bpf benchmark which uses only the
>   existing bpf ioctls. It fetches all packets from a NIC and writes
>   them to a file.
>
>   test_mqbpf is a multithreaded bpf benchmark which uses the new
>   multiqueue bpf ioctls. Each thread fetches packets only from its
>   pinned queue and writes them to a separate per-thread file.
>
> - Test conditions
>   iperf was used to generate network traffic, with the following options:
>     test node:  iperf -s -i1
>     other node: iperf -c [IP] -i1 -t 100000 -P8   # 8 threads, TCP
>
>   Tested with the following four kernels for comparison:
>     current: GENERIC kernel on current, BPFIF_LOCK: mtx,    BPFQ_LOCK: doesn't exist
>     mq_bpf1: RSS kernel on mq_bpf,     BPFIF_LOCK: mtx,    BPFQ_LOCK: mtx
>     mq_bpf2: RSS kernel on mq_bpf,     BPFIF_LOCK: mtx,    BPFQ_LOCK: rmlock
>     mq_bpf3: RSS kernel on mq_bpf,     BPFIF_LOCK: rmlock, BPFQ_LOCK: rmlock
>
> - Benchmark results (MB/s)
>   Each result is the average of 20 runs of test_sqbpf / test_mqbpf:
>
>               test_sqbpf    test_mqbpf
>   current     26.65568315   -
>   mq_bpf1     24.96387975   36.608574
>   mq_bpf2     27.13427415   41.76666665
>   mq_bpf3     27.0958332    51.48198915

This looks good, and it looks as if the performance scales linearly.
Were the test programs cpuset to each core?

Is the test code in the p4 tree yet?

Best,
George