From: Kazuya Goda <gockzy@gmail.com>
Date: Thu, 4 Aug 2011 19:08:37 +0900
To: soc-status@freebsd.org
Subject: [status report] RPS/RFS #week10

Hi,

The goal of this project is to implement RPS/RFS on FreeBSD. RPS addresses the limitation of single-queue NICs, which cannot distribute received packets across multiple processors. RFS is an extension of RPS that delivers each packet to the CPU where its consuming application is running.
This week's status:

* Research the "lock" problem

With the sysctls set as below:
  - net.isr.direct=0
  - net.isr.direct_force=0
the netisr (protocol stack) thread and the dispatcher thread end up running on the same CPU. For the time being, net.isr.direct_force is set to 1.

* Benchmark test

I used netperf, running 50 concurrent instances of the TCP_RR test with 1-byte requests and responses. The results are below. SOFT_RSS performs almost the same as RPS, but SOFT_RSS works across more flows. I would like to benchmark with many flows, but with many flows performance drops due to a lock in the protocol stack.

  NO RPS/SOFT_RSS - 87k tps
  RPS             - 100k tps
  SOFT_RSS        - 99k tps

Next week:

* Improve performance

--Kazuya Goda