From: Kazuya Goda <gockzy@gmail.com>
Date: Wed, 29 Jun 2011 23:25:44 +0900
To: soc-status@freebsd.org
Subject: [status report] RPS/RFS #week5

Hi,

The project goal is to implement RPS/RFS on FreeBSD. RPS solves the
problem that a single-queue NIC cannot distribute packet processing
across multiple processors. RFS is an extension of RPS which delivers
packets to the CPU where the application is running.

This week's status:

* Implement RFS

** Data Structures

I added two global tables and an entry in struct sockbuf for RFS.
The tables are "socket_flow_table" and "netisr_flow_table"; the entry
is "flowid".

[ socket_flow_table ]

Structure:

    unsigned socket_flow_table[SOCKET_FLOW_ENTS];

This table is populated by the recvmsg() call with the ID of the CPU
where the application is running. This value is called the "dst cpu".

- operating functions
  + record_dstcpu()   : records the CPU ID ("dst cpu") in soreceive()
  + get_flow_dstcpu() : gets the "dst cpu" for a flow

[ netisr_flow_table ]

Structure:

    struct netisr_flow {
            uint16_t cpu;
            unsigned last_qtail;
    };
    struct netisr_flow netisr_flow_table[NETISR_FLOW_ENTS];

This table records the CPU most recently used to handle packets for
each connection. This value is called the "cur cpu".

- operating functions
  + record_curcpu()   : records the CPU ID ("cur cpu")
  + get_flow_curcpu() : gets the "cur cpu" for a flow
  + inc_flow_queue()  : increments netisr_flow_table[index].last_qtail
  + dec_flow_queue()  : decrements netisr_flow_table[index].last_qtail
  + get_flow_queue()  : returns netisr_flow_table[index].last_qtail

[ entry flowid ]

I added "uint32_t flowid" to struct sockbuf. This entry is populated
by tcp_input() with m->m_pkthdr.flowid.

** Select CPU

The two CPU values ("dst cpu" and "cur cpu") are compared when deciding
which CPU should process a packet. If "cur cpu" is unset, "dst cpu" is
used. If the two values are the same, that CPU is used. If both are
valid CPU IDs but they differ, last_qtail is consulted: if last_qtail
is 0, "cur cpu" is used; otherwise, "dst cpu" is used.

Next week:

* complete the RFS implementation

Regards,
Kazuya Goda