From owner-freebsd-hackers@FreeBSD.ORG Sun Sep 22 22:06:04 2013
Date: Mon, 23 Sep 2013 02:08:06 +0400
From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: "Alexander V. Chernikov"
Cc: adrian@freebsd.org, Andre Oppermann, freebsd-hackers@freebsd.org, freebsd-arch@freebsd.org, luigi@freebsd.org, ae@FreeBSD.org, Gleb Smirnoff, FreeBSD Net
Subject: Re: Network stack changes
Message-ID: <20130922220806.GK3796@zxy.spb.ru>
References: <521E41CB.30700@yandex-team.ru> <521E78B0.6080709@freebsd.org> <20130829013241.GB70584@zxy.spb.ru> <523F4C8D.6080903@yandex-team.ru>
In-Reply-To: <523F4C8D.6080903@yandex-team.ru>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
List-Id: Technical Discussions relating to FreeBSD

On Mon, Sep 23, 2013 at 12:01:17AM +0400, Alexander V. Chernikov wrote:
> On 29.08.2013 05:32, Slawa Olhovchenkov wrote:
> > On Thu, Aug 29, 2013 at 12:24:48AM +0200, Andre Oppermann wrote:
> >
> >>> ..
> >>> while Intel DPDK claims 80 Mpps (and 6WINDGate talks about 160 or so) on the same-class hardware and
> >>> _userland_ forwarding.
> >> Those numbers sound a bit far out. Maybe if the packet isn't touched
> >> or looked at at all in a pure netmap interface-to-interface bridging
> >> scenario. I don't believe these numbers.
> > 80*64*8 = 40.96 Gb/s
> > Maybe DCA? And using a CPU with 40 PCIe lanes and 4 memory channels.
> Intel introduced DDIO instead of DCA:
> http://www.intel.com/content/www/us/en/io/direct-data-i-o.html
> (and it seems DCA does not help much):
> https://www.myricom.com/software/myri10ge/790-how-do-i-enable-intel-direct-cache-access-dca-with-the-linux-myri10ge-driver.html
> https://www.myricom.com/software/myri10ge/783-how-do-i-get-the-best-performance-with-my-myri-10g-network-adapters-on-a-host-that-supports-intel-data-direct-i-o-ddio.html
>
> (However, the DPDK paper notes DDIO is a significant helper.)

Ha, the Intel paper also says SMT is significantly better than HT. In the real world -- same shit. And for a network application, what happens when buffering needs more than the L3 cache? Maybe some bad things...
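For reference, the throughput arithmetic quoted above can be sanity-checked with a small sketch. The 80 Mpps rate and 64-byte frame size come from the thread; the 20 bytes of per-frame wire overhead (7 B preamble + 1 B SFD + 12 B inter-frame gap) and the 10GbE line rate are standard Ethernet figures, not claims from this discussion.

```python
# Sanity-check of the quoted claim: 80 Mpps of minimum-size frames.
pps = 80e6           # claimed DPDK forwarding rate, packets/s
frame = 64           # minimum Ethernet frame size, bytes (incl. FCS)

# Payload-only rate, exactly as computed in the thread: 80*64*8.
payload_gbps = pps * frame * 8 / 1e9
print(f"payload rate: {payload_gbps:.2f} Gb/s")       # 40.96

# On the wire each frame also costs 7 B preamble + 1 B SFD and a
# 12 B inter-frame gap, i.e. 84 B per minimum-size frame.
wire_gbps = pps * (frame + 20) * 8 / 1e9
print(f"wire rate:    {wire_gbps:.2f} Gb/s")          # 53.76

# A single 10GbE port therefore tops out well below 80 Mpps:
max_pps_10g = 10e9 / ((frame + 20) * 8)
print(f"10GbE limit:  {max_pps_10g / 1e6:.2f} Mpps")  # 14.88
```

At roughly 14.88 Mpps per 10GbE port, 80 Mpps is an aggregate over several ports (or a faster link), which is worth keeping in mind when comparing such vendor numbers against single-interface forwarding tests.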